id | title | abstract | full_text | qas | figures_and_tables
|---|---|---|---|---|---|
1909.01013
|
Duality Regularization for Unsupervised Bilingual Lexicon Induction
|
Unsupervised bilingual lexicon induction naturally exhibits duality, which results from symmetry in back-translation. For example, EN-IT and IT-EN induction can be mutually primal and dual problems. Current state-of-the-art methods, however, consider the two tasks independently. In this paper, we propose to train primal and dual models jointly, using regularizers to encourage consistency in back translation cycles. Experiments across 6 language pairs show that the proposed method significantly outperforms competitive baselines, obtaining the best-published results on a standard benchmark.
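The back-translation cycle described above can be illustrated with a minimal numpy sketch (all names hypothetical; this is not the paper's implementation): two linear maps F and G are regularized so that G(F(x)) stays close to x under a cosine-similarity criterion.

```python
import numpy as np

def row_cosine(a, b):
    # Row-wise cosine similarity between two embedding matrices.
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a_n * b_n, axis=1)

def cycle_loss(X, W_f, W_g):
    # 1 - average cosine similarity between X and G(F(X)), where the
    # primal map F and dual map G are the matrices W_f and W_g.
    return 1.0 - row_cosine(X, X @ W_f @ W_g).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
W_f = rng.normal(size=(4, 4))

# When G exactly inverts F, the back-translation cycle is closed and
# the regularizer vanishes (up to floating-point error).
assert abs(cycle_loss(X, W_f, np.linalg.inv(W_f))) < 1e-9
```

An arbitrary G, by contrast, leaves the loss positive, which is the signal that joint training penalizes.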
|
{
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Approach ::: Baseline Adversarial Model",
"Approach ::: Regularizers for Dual Models",
"Approach ::: Model Selection",
"Experiments",
"Experiments ::: Experimental Settings",
"Experiments ::: The Effectiveness of Dual Learning",
"Experiments ::: Comparison with the State-of-the-art",
"Conclusion"
],
"paragraphs": [
[
"Unsupervised bilingual lexicon induction (UBLI) has been shown to benefit NLP tasks for low resource languages, including unsupervised NMT BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, information retrieval BIBREF5, BIBREF6, dependency parsing BIBREF7, and named entity recognition BIBREF8, BIBREF9.",
"Recent research has attempted to induce unsupervised bilingual lexicons by aligning monolingual word vector spaces BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15. Given a pair of languages, their word alignment is inherently a bi-directional problem (e.g. English-Italian vs Italian-English). However, most existing research considers mapping from one language to another without making use of symmetry. Our experiments show that separately learned UBLI models are not always consistent in opposite directions. As shown in Figure 1a, when the model of BIBREF11 Conneau18a is applied to English and Italian, the primal model maps the word “three” to the Italian word “tre”, but the dual model maps “tre” to “two” instead of “three”.",
"We propose to address this issue by exploiting duality, encouraging forward and backward mappings to form a closed loop (Figure 1b). In particular, we extend the model of BIBREF11 Conneau18a by using a cycle consistency loss BIBREF16 to regularize two models in opposite directions. Experiments on two benchmark datasets show that the simple method of enforcing consistency gives better results in both directions. Our model significantly outperforms competitive baselines, obtaining the best published results. We release our code at xxx."
],
[
"UBLI. A typical line of work uses adversarial training BIBREF17, BIBREF10, BIBREF18, BIBREF11, matching the distributions of source and target word embeddings through generative adversarial networks BIBREF19. Non-adversarial approaches have also been explored. For instance, BIBREF15 Mukherjee18EMNLP use squared-loss mutual information to search for optimal cross-lingual word pairing. BIBREF13 and BIBREF20 exploit the structural similarity of word embedding spaces to learn word mappings. In this paper, we choose BIBREF11 Conneau18a as our baseline as it is theoretically attractive and gives strong results on large-scale datasets.",
"Cycle Consistency. Forward-backward consistency has been used to discover the correspondence between unpaired images BIBREF21, BIBREF22. In machine translation, similar ideas have been exploited: BIBREF23, BIBREF24 and BIBREF25 use dual learning to train two “opposite” language translators by minimizing the reconstruction loss. BIBREF26 consider back-translation, where a backward model is used to build a synthetic parallel corpus and a forward model learns to generate genuine text based on the synthetic output.",
"Closer to our method, BIBREF27 jointly train two autoencoders to learn supervised bilingual word embeddings. BIBREF28 use Sinkhorn distance BIBREF29 and back-translation to align word embeddings. However, they cannot perform fully unsupervised training, relying on WGAN BIBREF30 to provide initial mappings. Concurrent with our work, BIBREF31 build an adversarial autoencoder with a cycle consistency loss and a post-cycle reconstruction loss. In contrast to these works, our method is fully unsupervised, simpler, and empirically more effective."
],
[
"We take BIBREF11 as our baseline, introducing a novel regularizer to enforce cycle consistency. Let $X=\\lbrace x_1,...,x_n\\rbrace $ and $Y=\\lbrace y_1,...,y_m\\rbrace $ be two sets of $n$ and $m$ word embeddings for a source and a target language, respectively. The primal UBLI task aims to learn a linear mapping $\\mathcal {F}:X\\rightarrow Y$ such that for each $x_i$, $\\mathcal {F}(x_i)$ corresponds to its translation in $Y$. Similarly, a linear mapping $\\mathcal {G}:Y\\rightarrow X$ is defined for the dual task. In addition, we introduce two language discriminators $D_x$ and $D_y$, which are trained to discriminate between the mapped word embeddings and the original word embeddings."
],
[
"BIBREF11 align two word embedding spaces through generative adversarial networks, in which two networks are trained simultaneously. Specifically, taking the primal UBLI task as an example, the linear mapping $\\mathcal {F}$ tries to generate “fake” word embeddings $\\mathcal {F}(x)$ that look similar to word embeddings from $Y$, while the discriminator $D_y$ aims to distinguish between “fake” and real word embeddings from $Y$. Formally, this idea can be expressed as the minimax game min$_{\\mathcal {F}}$max$_{D_y}\\ell _{adv}(\\mathcal {F},D_y,X,Y)$, where",
"$P_{D_y}(src|y_j)$ is a model probability from $D_y$ to distinguish whether word embedding $y_j$ is coming from the target language (src = 1) or the primal mapping $\\mathcal {F}$ (src = 0). Similarly, the dual UBLI problem can be formulated as min$_{\\mathcal {G}}$max$_{D_x}\\ell _{adv}(\\mathcal {G},D_x,Y,X)$, where $\\mathcal {G}$ is the dual mapping, and $D_x$ is a source discriminator.",
"Theoretically, a unique solution to the above minimax game exists, with the mapping and the discriminator reaching a Nash equilibrium. Since the adversarial training happens at the distribution level, no cross-lingual supervision is required."
],
[
"We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is similar to $X$ and $\\mathcal {F}(\\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\\ell _{cycle}$) on each side to avoid $\\mathcal {F}$ and $\\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4.",
"Cycle Consistency Loss. We introduce",
"where $\\Delta $ denotes the discrepancy criterion, which is set as the average cosine similarity in our model.",
"Full objective. The final objective is:"
],
[
"We follow BIBREF11, using an unsupervised criterion to perform model selection. In preliminary experiments, we find in adversarial training that the single-direction criterion $S(\\mathcal {F}, X, Y)$ by BIBREF11 does not always work well. To address this, we make a simple extension by calculating the weighted average of forward and backward scores:",
"Where $\\lambda $ is a hyperparameter to control the importance of the two objectives. Here $S$ first generates bilingual lexicons by learned mappings, and then computes the average cosine similarity of these translations."
],
[
"We perform two sets of experiments, to investigate the effectiveness of our duality regularization in isolation (Section SECREF16) and to compare our final models with the state-of-the-art methods in the literature (Section SECREF18), respectively."
],
[
"Dataset and Setup. Our datasets include: (i) the Multilingual Unsupervised and Supervised Embeddings (MUSE) dataset released by BIBREF11 Conneau18a; (ii) the more challenging Vecmap dataset from BIBREF32 Dinu15 and the extensions of BIBREF33 Artetxe17ACL. We follow the evaluation setup of BIBREF11, utilizing cross-domain similarity local scaling (CSLS) to retrieve the translations of given source words. Following standard evaluation practice BIBREF34, BIBREF35, BIBREF11, we report precision at 1 (P@1) scores. Given the instability of existing methods, we follow BIBREF13 in performing 10 runs for each method, and report the best and average accuracies."
],
[
"We compare our method with BIBREF11 (Adv-C) under the same settings. As shown in Table TABREF12, our model outperforms Adv-C on both MUSE and Vecmap for all language pairs (except ES-EN). In addition, the proposed approach is less sensitive to initialization, and thus more stable than Adv-C over multiple runs. These results demonstrate the effectiveness of dual learning. Our method is also superior to Adv-C for the low-resource language pairs English $\leftrightarrow $ Malay (MS) and English $\leftrightarrow $ Esperanto (EO). Adv-C gives low performance on ES-EN and DE-EN, but much better results in the opposite directions on Vecmap. This is likely because the separate models are highly under-constrained, and thus prone to getting stuck in poor local optima. In contrast, our method gives comparable results in both directions for these two language pairs, thanks to the use of information symmetry.",
"Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization."
],
[
"In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learns a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, constructing a bilingual lexicon and re-learning the mapping matrix in alternation. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines Sinkhorn distance BIBREF29 and back-translation. For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$).",
"Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on Procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning helps provide good initializations, so that the Procrustes solution is less likely to fall into poor local optima. Unsup-SL gives strong results on all language pairs because it uses a robust self-learning framework, which contains several techniques to avoid poor local optima.",
"Additionally, we observe that our unsupervised method performs competitively with, and sometimes better than, strong supervised and semi-supervised approaches. Ours-Procrustes obtains comparable results to Procrustes on EN-IT and gives strong results on EN-DE, EN-FI, EN-ES and the opposite directions. Ours-GeoMM$_{semi}$ obtains state-of-the-art results on all tested language pairs except EN-FI, with the additional advantage of being fully unsupervised."
],
[
"We investigated a regularization method to enhance unsupervised bilingual lexicon induction, encouraging symmetry in lexical mapping between a pair of word embedding spaces. Results show that strengthening bi-directional mapping consistency yields significant improvements over the state-of-the-art method, leading to the best results on a standard benchmark."
]
]
}
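The bidirectional model-selection criterion described in the text (a weighted combination of the forward score S(F, X, Y) and the backward score S(G, Y, X)) can be sketched as follows; the scores and checkpoint names below are hypothetical stand-ins for the paper's unsupervised validation values.

```python
def selection_score(s_forward, s_backward, lam=0.5):
    # Weighted average of the forward criterion S(F, X, Y) and the
    # backward criterion S(G, Y, X); lam weights the two directions.
    return lam * s_forward + (1.0 - lam) * s_backward

# Hypothetical per-checkpoint (forward, backward) validation scores:
checkpoints = {"ckpt_a": (0.50, 0.30), "ckpt_b": (0.45, 0.44)}

# A single-direction criterion would pick ckpt_a (0.50 forward), but the
# combined score prefers ckpt_b, which is strong in *both* directions.
best = max(checkpoints, key=lambda name: selection_score(*checkpoints[name]))
print(best)  # → ckpt_b
```

This illustrates why the single-direction criterion can fail: a checkpoint that maps well in one direction may still be badly inconsistent in the other.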
|
{
"question": [
"What regularizers were used to encourage consistency in back translation cycles?",
"What are new best results on standard benchmark?",
"How better is performance compared to competitive baselines?",
"How big is data used in experiments?",
"What 6 language pairs is experimented on?",
"What are current state-of-the-art methods that consider the two tasks independently?"
],
"question_id": [
"3a8d65eb8e1dbb995981a0e02d86ebf3feab107a",
"d0c79f4a5d5c45fe673d9fcb3cd0b7dd65df7636",
"54c7fc08598b8b91a8c0399f6ab018c45e259f79",
"5112bbf13c7cf644bf401daecb5e3265889a4bfc",
"03ce42ff53aa3f1775bc57e50012f6eb1998c480",
"ebeedbb8eecdf118d543fdb5224ae610eef212c8"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"an adversarial loss ($\\ell _{adv}$) for each model as in the baseline",
"a cycle consistency loss ($\\ell _{cycle}$) on each side"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is similar to $X$ and $\\mathcal {F}(\\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\\ell _{cycle}$) on each side to avoid $\\mathcal {F}$ and $\\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4."
],
"highlighted_evidence": [
"We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is similar to $X$ and $\\mathcal {F}(\\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\\ell _{cycle}$) on each side to avoid $\\mathcal {F}$ and $\\mathcal {G}$ from contradicting each other."
]
}
],
"annotation_id": [
"b9a984425cbc2d5d4e9ee47b1389f956badcb464"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "New best results of accuracy (P@1) on Vecmap:\nOurs-GeoMMsemi: EN-IT 50.00 IT-EN 42.67 EN-DE 51.60 DE-EN 47.22 FI-EN 39.62 EN-ES 39.47 ES-EN 36.43",
"evidence": [
"Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima.",
"FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs."
],
"highlighted_evidence": [
"Table TABREF15 shows the final results on Vecmap.",
"FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs."
]
}
],
"annotation_id": [
"0e8bac71d1d4d344b19e68d3a517f0602009c7b8"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Proposed method vs best baseline result on Vecmap (Accuracy P@1):\nEN-IT: 50 vs 50\nIT-EN: 42.67 vs 42.67\nEN-DE: 51.6 vs 51.47\nDE-EN: 47.22 vs 46.96\nEN-FI: 35.88 vs 36.24\nFI-EN: 39.62 vs 39.57\nEN-ES: 39.47 vs 39.30\nES-EN: 36.43 vs 36.06",
"evidence": [
"FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs.",
"Table TABREF15 shows the final results on Vecmap. We first compare our model with the state-of-the-art unsupervised methods. Our model based on procrustes (Ours-Procrustes) outperforms Sinkhorn-BT on all test language pairs, and shows better performance than Adv-C-Procrustes on most language pairs. Adv-C-Procrustes gives very low precision on DE-EN, FI-EN and ES-EN, while Ours-Procrustes obtains reasonable results consistently. A possible explanation is that dual learning is helpful for providing good initiations, so that the procrustes solution is not likely to fall in poor local optima. The reason why Unsup-SL gives strong results on all language pairs is that it uses a robust self-learning framework, which contains several techniques to avoid poor local optima."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs.",
"Table TABREF15 shows the final results on Vecmap."
]
}
],
"annotation_id": [
"208ff0e360529ceb1220d1c11abc0b48d2208cd3"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"55e2519b0e80ebeca6f4334336688963a9a7da25"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "EN<->ES\nEN<->DE\nEN<->IT\nEN<->EO\nEN<->MS\nEN<->FI",
"evidence": [
"Table TABREF13 shows the inconsistency rates of back translation between Adv-C and our method on MUSE. Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12. Table TABREF14 gives several word translation examples. In the first three cases, our regularizer successfully fixes back translation errors. In the fourth case, ensuring cycle consistency does not lead to the correct translation, which explains some errors by our system. In the fifth case, our model finds a related word but not the same word in the back translation, due to the use of cosine similarity for regularization.",
"FLOAT SELECTED: Table 1: Accuracy on MUSE and Vecmap."
],
"highlighted_evidence": [
"Compared with Adv-C, our model significantly reduces the inconsistency rates on all language pairs, which explains the overall improvement in Table TABREF12.",
"FLOAT SELECTED: Table 1: Accuracy on MUSE and Vecmap."
]
}
],
"annotation_id": [
"259abfe9d7fa091be049c2554871e822c006e168"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Procrustes",
"GPA",
"GeoMM",
"GeoMM$_{semi}$",
"Adv-C-Procrustes",
"Unsup-SL",
"Sinkhorn-BT"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation. For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$)."
],
"highlighted_evidence": [
"In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learn a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines sinkhorn distance BIBREF29 and back-translation."
]
}
],
"annotation_id": [
"a2a38b25d3dca1acd3bc852e88bb4ee8038f3cee"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: (a) Inconsistency between primal model F and the dual model G. (b) An ideal scenario.",
"Figure 2: The proposed framework. (a)X → F(X)→ G(F(X))→ X; (b) Y → G(Y )→ F(G(Y ))→ Y .",
"Table 1: Accuracy on MUSE and Vecmap.",
"Table 4: Accuracy (P@1) on Vecmap. The best results are bolded. †Results as reported in the original paper. For unsupervised methods, we report the average accuracy across 10 runs."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png",
"4-Table4-1.png"
]
}
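The experiments in this record retrieve translations with cross-domain similarity local scaling (CSLS). A minimal numpy sketch of CSLS-based retrieval, assuming a precomputed cosine-similarity matrix between mapped source and target embeddings (the matrix values below are illustrative only):

```python
import numpy as np

def csls(sim, k=10):
    # CSLS(x, y) = 2*cos(x, y) - r_T(x) - r_S(y), where r_T(x) is the
    # mean similarity of source word x to its k nearest target words,
    # and r_S(y) the mean similarity of target word y to its k nearest
    # source words; this penalizes "hub" words in dense regions.
    k = min(k, sim.shape[0], sim.shape[1])
    r_t = np.sort(sim, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    r_s = np.sort(sim, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    return 2 * sim - r_t - r_s

# Each source word's translation is the target with the highest CSLS score.
sim = np.array([[0.9, 0.2, 0.1],
                [0.3, 0.8, 0.2],
                [0.1, 0.3, 0.7]])
print(csls(sim, k=2).argmax(axis=1))  # → [0 1 2]
```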
|
1901.02534
|
Team Papelo: Transformer Networks at FEVER
|
We develop a system for the FEVER fact extraction and verification challenge that uses a high precision entailment classifier based on transformer networks pretrained with language modeling, to classify a broad set of potential evidence. The precision of the entailment classifier allows us to enhance recall by considering every statement from several articles to decide upon each claim. We include not only the articles best matching the claim text by TFIDF score, but read additional articles whose titles match named entities and capitalized expressions occurring in the claim text. The entailment module evaluates potential evidence one statement at a time, together with the title of the page the evidence came from (providing a hint about possible pronoun antecedents). In preliminary evaluation, the system achieves .5736 FEVER score, .6108 label accuracy, and .6485 evidence F1 on the FEVER shared task test set.
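The full text of this record describes aggregating per-sentence entailment decisions into a claim label, resolving support/refute conflicts in favor of support, and prepending the Wikipedia page title as a cheap pronoun-antecedent hint. A minimal sketch under assumed label strings (the exact labels and helper names here are illustrative, not taken from the paper's code):

```python
def aggregate(sentence_labels):
    # Combine per-sentence decisions into one claim label. Support wins
    # over refute, since refuting evidence may concern a different
    # entity that shares the claim's name (e.g. two "Ann Richards").
    if "SUPPORTS" in sentence_labels:
        return "SUPPORTS"
    if "REFUTES" in sentence_labels:
        return "REFUTES"
    return "NOT ENOUGH INFO"

def with_title(page_title, sentence):
    # Prepend the source page title (underscores -> spaces) in brackets,
    # a cheap substitute for coreference resolution.
    return "[ " + page_title.replace("_", " ") + " ] " + sentence

print(aggregate(["NEUTRAL", "REFUTES", "SUPPORTS"]))  # → SUPPORTS
print(with_title("Ann_Richards", "She was a governor of Texas."))
```

The bracketed-title format is an assumption; the paper specifies only that underscores become spaces and the title is inserted between brackets before the premise.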
|
{
"section_name": [
"Introduction",
"Transformer network",
"Reframing entailment",
"Improving retrieval",
"Discussion"
],
"paragraphs": [
[
"The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted.",
"As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher.",
"The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence.",
"Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2, BIBREF3 as implemented in the official baseline, ESIM BIBREF4, and a transformer network with pre-trained weights BIBREF5. The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods."
],
[
"The core of our system is an entailment module based on a transformer network. Transformer networks BIBREF6 are deep networks applied to sequential input data, with each layer implementing multiple heads of scaled dot product attention. This attention mechanism allows deep features to be compared across positions in the input.",
"Many entailment networks have two sequence inputs, but the transformer is designed with just one. A separator token divides the premise from the hypothesis.",
"We use a specific transformer network released by OpenAI BIBREF5 that has been pre-trained for language modeling. The network consists of twelve blocks. Each block consists of a multi-head masked self-attention layer, layer normalization BIBREF7, a feed forward network, and another layer normalization. After the twelfth block, two branches exist. In one branch, matrix multiplication and softmax layers are applied at the terminal sequence position to predict the entailment classification. In the other branch, a hidden state is multiplied by each token embedding and a softmax is taken to predict the next token. The language modeling branch has been pre-trained on the BookCorpus dataset BIBREF8. We take the pre-trained model and train both branches on examples from FEVER."
],
[
"The baseline FEVER system BIBREF0 ran the AllenNLP BIBREF3 implementation of Decomposable Attention BIBREF2 to classify a group of five premise statements concatenated together against the claim. These five premise statements were fixed by the retrieval module and not considered individually. In our system, premise statements are individually evaluated.",
"We collect training data as the five sentences with the highest TFIDF score against the claim, taken from the Wikipedia pages selected by the retrieval module. If any ground truth evidence group for a claim requires more than one sentence, the claim is dropped from the training set. Otherwise, each sentence is labeled with the truth value of the claim if it is in the ground truth evidence set, and labeled as neutral if not. The resulting data forms an entailment problem that we call “FEVER One.” For comparison, we form “FEVER Five” and “FEVER Five Oracle” by concatenating all five retrieved sentences, as in the baseline. In FEVER Five Oracle, the ground truth is the claim ground truth (if verifiable), but in FEVER Five, ground truth depends on whether the retrieved evidence is in the ground truth evidence set.",
"Several FEVER claims require multiple statements as evidence in order to be supported or refuted. The number of such claims is relatively small: in the first half of the development set, only 623 of 9999 claims were verifiable and had no singleton evidence groups. Furthermore, we disagreed with many of these annotations and thought that less evidence should have sufficed. Thus we chose not to develop a strategy for multiple evidence statements.",
"To compare results on FEVER Five to FEVER One, we must aggregate decisions about individual sentences of possible evidence to a decision about the claim. We do this by applying the following rules:",
"We resolve conflicts between supporting and refuting information in favor of the supporting information, because we observed cases in the development data where information was retrieved for different entities with the same name. For example, Ann Richards appeared both as a governor of Texas and as an Australian actress. Information that would be a contradiction regarding the actress should not stop evidence that would support a claim about the politician.",
"Even if a sentence is in the evidence set, it might not be possible for the classifier to correctly determine whether it supports the claim, because the sentence could have pronouns with antecedents outside the given sentence. Ideally, a coreference resolution system could add this information to the sentence, but running one could be time consuming and introduce its own errors. As a cheap alternative, we make the classifier aware of the title of the Wikipedia page. We convert any underscores in the page title to spaces, and insert the title between brackets before the rest of each premise sentence. The dataset constructed in this way is called “FEVER Title One.”",
"The FEVER baseline system works by solving FEVER Five Oracle. Using Decomposable Attention, it achieves .505 accuracy on the test half of the development set. Swapping in the Enhanced Sequential Inference Model (ESIM) BIBREF4 to solve FEVER Five Oracle results in an accuracy of .561. Because ESIM uses a single out-of-vocabulary (OOV) token for all unknown words, we expect it to confuse named entities. Thus we extend the model by allocating 10,000 indices for out-of-vocabulary words with randomly initialized embeddings, and taking a hash of each OOV word to select one of these indices. With extended ESIM, the accuracy is .586. Therefore, we run most later comparisons with extended ESIM or transformer networks as the entailment module, rather than Decomposable Attention.",
"The FEVER One dataset is highly unbalanced in favor of neutral statements, so that the majority class baseline would achieve 93.0% on this data. In fact it makes training ESIM a challenge, as the model only learns the trivial majority class predictor if the natural training distribution is followed. We reweight the examples in FEVER One for ESIM so that each class contributes to the loss equally. Then, we use Cohen's Kappa rather than the accuracy to evaluate a model's quality, so that following the bias with purely random agreement is not rewarded in the evaluation. In Table 1 we compare FEVER One to FEVER Title One, both at the level of classifying individual support statements and of classifying the claim by aggregating these decisions as described above. On a support basis, we find a 52% increase in Kappa by adding the titles.",
"When ESIM is replaced by the transformer network, class reweighting is not necessary. The network naturally learns to perform in excess of the majority class baseline. Cohen's Kappa is 68% higher than that for ESIM. The possibility of training on oracle labels for a concatenated set of evidence allows a classifier to simply guess whether the hypothesis is true and supported somewhere, rather than having to consider the relationship between hypothesis and premise. For example, it is possible to classify 67% of SNLI examples correctly without reading the premise BIBREF9 . As we show in Table 2 , for ESIM, we find that this kind of guessing makes the FEVER Title Five Oracle performance better than FEVER Title Five. The Transformer model is accurate enough that oracle guessing does not help. Both models perform best when classifying each piece of evidence separately and then aggregating."
],
[
"Regardless of how strong the entailment classifier is, FEVER score is limited by whether the document and sentence retrieval modules, which produce the input to the entailment classifier, find the right evidence. In Table 3 , we examine the percentage of claims for which correct evidence is retrieved, before filtering with the entailment classifier. For this calculation, we skip any claim with an evidence group with multiple statements, and count a claim as successfully retrieved if it is not verifiable or if the statement in one of the evidence groups is retrieved. The baseline system retrieves the five articles with the highest TFIDF score, and then extracts the five sentences from that collection with the highest TFIDF score against the claim. It achieves 66.1% evidence retrieval.",
"Our first modification simply adds the title to each premise statement when computing its TFIDF against the claim, so that statements from a relevant article get credit even if the subject is not repeated. This raises evidence retrieval to 68.3%.",
"A more significant boost comes from retrieving additional Wikipedia pages based on named entity recognition (NER). We start with phrases tagged as named entities by SpaCy BIBREF10 , but these tags are not very reliable, so we include various capitalized phrases. We retrieve Wikipedia pages whose title exactly matches one of these phrases.",
"The named entity retrieval strategy boosts the evidence retrieval rate to 80.8%, while less than doubling the processing time. However, sometimes the named entity page thus retrieved is only a Wikipedia disambiguation page with no useful information. Noticing a lot of questions about films in the development set, we modify the strategy to also retrieve a page titled “X (film)” if it exists, whenever “X” is retrieved. The film retrievals raise evidence retrieval to 81.2%.",
"Finally, we eliminate the TFIDF sentence ranking to expand sentence retrieval from five sentences to entire articles, up to the first fifty sentences from each. Thus we obtain 2.6 million statements to classify regarding the 19,998 claims in the shared task development set, for an average of 128 premises per claim. The evidence retrieval rate, including all these premises, increases to 90.1%. We continue to apply the entailment module trained with only five premise retrievals. Running the entailment module on this batch using a machine with three NVIDIA GeForce GTX 1080Ti GPU cards takes on the order of six hours.",
"Retrieving more than five sentences means that we can no longer submit all retrieved evidence as support for the claims. Instead, we follow the aggregation strategy from Section \"Reframing entailment\" to decide the claim label, and only submit statements whose classification matches. Limiting evidence in this way when only five statements are retrieved (“narrow evidence” in Table 4 ) pushes FEVER score down very little, to .5550 from .5617 on the development set, so we have confidence that the extra retrieval will make up for the loss. Indeed, when the system reviews the extra evidence, FEVER score goes up to .5844 on the development set.",
"Table 4 compares the end-to-end performance of systems that evaluate five retrieved statements together, evaluate five retrieved statements separately, and evaluate all statements from entire articles separately. Evaluating the statements separately gives better performance. We submit the systems that retrieve five statements and entire articles for evaluation on the test set, achieving preliminary FEVER scores of .5539 and .5736 respectively (label accuracy of .5754 and .6108, evidence recall of .6245 and .5002, evidence F1 of .2542 and .6485). In preliminary standings, the latter system ranks fourth in FEVER score and first in evidence F1."
],
[
"Our approach to FEVER involves a minimum of heuristics and relies mainly on the strength of the Transformer Network based entailment classification. The main performance gains come from adding retrievals that resolve named entities rather than matching the claim text only, filtering fewer of the retrievals, and making the entailment classifier somewhat aware of the topic of what it is reading by including the title. If higher-quality and more plentiful multi-evidence claims were constructed, it would be worthwhile to incorporate dynamic retrieval into the system, allowing the classifier to decide that it needs more information about keywords it encountered during reading."
]
]
}
|
{
"question": [
"How big is their training set?",
"What baseline do they compare to?",
"Which pre-trained transformer do they use?",
"What is the FEVER task?"
],
"question_id": [
"9efd025cfa69c6ff2777528bd158f79ead9353d1",
"559c1307610a15427caeb8aff4d2c01ae5c9de20",
"4ecb6674bcb4162bf71aea8d8b82759255875df3",
"eacc1eb65daad055df934e0e878f417b73b2ecc1"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"transformer",
"transformer",
"transformer",
"transformer"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"0efbcf10ffd60b7ac765e797acb4188b6fb548c7"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 ."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods."
],
"highlighted_evidence": [
"Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods."
]
}
],
"annotation_id": [
"dca8d216296bceafacb89fa8c0e8e3404ad2f298"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"BIBREF5"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our approach to FEVER is to fix the most obvious shortcomings of the baseline approaches to retrieval and entailment, and to train a sharp entailment classifier that can be used to filter a broad set of retrieved potential evidence. For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . The transformer network naturally supports out-of-vocabulary words and gives substantially higher performance than the other methods."
],
"highlighted_evidence": [
"For the entailment classifier we compare Decomposable Attention BIBREF2 , BIBREF3 as implemented in the official baseline, ESIM BIBREF4 , and a transformer network with pre-trained weights BIBREF5 . "
]
}
],
"annotation_id": [
"dfd6ac4bdae8afaa4796cb91975e84117cd7f088"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted.",
"As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher.",
"The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence."
],
"highlighted_evidence": [
"The release of the FEVER fact extraction and verification dataset BIBREF0 provides a large-scale challenge that tests a combination of retrieval and textual entailment capabilities. To verify a claim in the dataset as supported, refuted, or undecided, a system must retrieve relevant articles and sentences from Wikipedia. Then it must decide whether each of those sentences, or some combination of them, entails or refutes the claim, which is an entailment problem. Systems are evaluated on the accuracy of the claim predictions, with credit only given when correct evidence is submitted.",
"As entailment data, premises in FEVER data differ substantially from those in the image caption data used as the basis for the Stanford Natural Language Inference (SNLI) BIBREF1 dataset. Sentences are longer (31 compared to 14 words on average), vocabulary is more abstract, and the prevalence of named entities and out-of-vocabulary terms is higher.",
"The retrieval aspect of FEVER is not straightforward either. A claim may have small word overlap with the relevant evidence, especially if the claim is refuted by the evidence."
]
}
],
"annotation_id": [
"723b977f0074c3cb287db7a362930b75459cfc32"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
}
|
{
"caption": [
"Table 1: Effect of adding titles to premises.",
"Table 2: Concatenating evidence or not.",
"Table 3: Percentage of evidence retrieved from first half of development set. Single-evidence claims only.",
"Table 4: FEVER Score of various systems. All use NE+Film retrieval."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"3-Table4-1.png"
]
}
|
2004.04435
|
Automatic Differentiation in ROOT
|
In mathematics and computer algebra, automatic differentiation (AD) is a set of techniques to evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.), elementary functions (exp, log, sin, cos, etc.) and control flow statements. AD takes source code of a function as input and produces source code of the derived function. By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program. This paper presents AD techniques available in ROOT, supported by Cling, to produce derivatives of arbitrary C/C++ functions through implementing source code transformation and employing the chain rule of differential calculus in both forward mode and reverse mode. We explain its current integration for gradient computation in TFormula. We demonstrate the correctness and performance improvements in ROOT's fitting algorithms.
|
{
"section_name": [
"Introduction",
"Background",
"Background ::: AD and its Modes",
"Background ::: AD Implementations",
"Architecture and Implementation",
"Results",
"Results ::: Accuracy",
"Results ::: Performance",
"Results ::: Performance in TFormula",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Accurate and efficient computation of derivatives is vital for a wide variety of computing applications, including numerical optimization, solution of nonlinear equations, sensitivity analysis, and nonlinear inverse problems. Virtually every process can be described with a mathematical function, which can be thought of as an association between elements from different sets. Derivatives track how a varying quantity depends on another quantity, for example how the position of a planet varies as time varies.",
"Derivatives and gradients (vectors of partial derivatives of multivariable functions) allow us to explore the properties of a function and thus the described process as a whole. Gradients are an essential component in gradient-based optimization methods, which have become more and more important in recent years, in particular with their application to training (deep) neural networks BIBREF0.",
"Several different techniques are commonly used to compute the derivatives of a given function, either exactly or approximately BIBREF1, BIBREF0, BIBREF2. The most prevalent techniques are:",
"Numerical differentiation, based on the finite difference method, provides a way to evaluate derivatives approximately. While simple, numerical differentiation can be slow (the run-time complexity grows linearly with the number of input variables) and may have problems with accuracy due to round-off and truncation errors.",
"Symbolic differentiation, based on transformations of symbolic expressions of functions, provides exact closed-form expressions for the derivatives. It faces difficulties when the function to be differentiated is not available in a closed form, which is often the case for computer programs which may contain control flow. Symbolic differentiation can produce derivative expressions that are computationally expensive to evaluate due to difficulties in exploiting common subexpressions.",
"Automatic differentiation (AD) computes derivatives accurately to the precision of the original function, supports control flow and uses at most a small constant factor more time and space than it takes to evaluate the original function, at the expense of increased implementation complexity and introducing more software dependencies.",
"Numerical and symbolic differentiation methods are slow at computing gradients of functions with many input variables, as is often needed for gradient-based optimization algorithms. Both methods have problems calculating higher-order derivatives, where the complexity and errors due to numerical precision increase. Automatic differentiation largely avoids the problems of numerical and symbolic differentiation.",
"In this paper, we describe the implementation of automatic differentiation techniques in ROOT, which is the data analysis framework broadly used in High-Energy Physics BIBREF3. This implementation is based on Clad BIBREF4, BIBREF5, which is an automatic differentiation plugin for computation expressed in C/C++."
],
[
"Here, we briefly discuss the main algorithmic and implementation principles behind AD. An in-depth overview and a more formal description can be found in BIBREF1 and BIBREF2, respectively."
],
[
"AD is based on the decomposition of the procedure (e.g. a source code that computes the original function) into a sequence of simple mathematical operations (e.g. $+, -, *, /, \\sin , \\cos , \\exp $) that can be expressed using a series of intermediate results. Subsequently, derivatives of every intermediate result are evaluated and combined via the chain rule of calculus to obtain the derivatives of the whole sequence. The control flow (e.g. branches, loops) can be incorporated by differentiating the control flow of the original function during the derivative evaluation. Two main modes of AD, which differ in the order of application of the chain rule, are used:",
"Forward mode operates in a top-down approach and computes the derivative of every intermediate result with respect to a single selected input variable of the function. As soon as a final result of the function is reached, the partial derivative with respect to the selected input is available. A single evaluation of the forward mode can only compute partial derivatives with respect to a single input variable. Thus, when the whole gradient is required, forward mode must be invoked once per every input variable, leading to $m \\cdot c_{F} \\cdot n$ runtime complexity, where $m$ is the number of input variables, $n$ is the algorithmic complexity of the original function and $c_{F} < 3 $ is a small constant factor overhead of a single invocation of the forward mode BIBREF2.",
"Reverse mode operates in a bottom-up approach and computes the derivative of a function's output with respect to every intermediate result. Once every input variable of the function is reached, the whole gradient of an output is available. Note that, independently of the number of input variables $N$, a single evaluation of the reverse mode is sufficient to get the whole gradient of a function's output, leading to $c_{R} \\cdot n$ runtime complexity, where $n$ is the complexity of the original function and $c_{R} \\le 4$ is a small constant factor overhead BIBREF2. This is a huge advantage in settings with a single scalar output and many inputs, which is often the case in machine-learning problems where $N \\gg 10^6$, making forward mode infeasible. As a disadvantage, reverse mode implementations are more complicated, and dynamic memory allocations may be required when dynamic control flow is involved. Depending on the original function, this may cause a single evaluation of the reverse mode to be somewhat slower compared to a single evaluation of the forward mode."
],
[
"AD techniques have been implemented in a variety of programming languages and paradigms, ranging from classical tools for Fortran BIBREF6 and C BIBREF7, to recent active work on tools specific to machine-learning applications BIBREF8, BIBREF9, and modern general-purpose programming languages BIBREF10, BIBREF11. We refer the reader to www.autodiff.org for a comprehensive list of available AD implementations for various languages.",
"In particular, several implementations exist for C++, e.g. BIBREF12, BIBREF13, BIBREF14. The majority of AD implementations fall into one of two categories of implementation techniques:",
"Tools based on operator overloading utilize features of programming languages like C++ and Python to define custom types and overload mathematical operators (e.g. +, -, *, /) and functions (e.g. $\\exp , \\sin , \\cos $) on them. Such implementations are often based on custom AD-enabled types that wrap values of both the original and derivative functions and redefine operators to simultaneously act on original and derivative values. In C++, such tools are often implemented as a library that introduces templated differentiable types and corresponding mathematical operations. Then, functions called on the custom type return both original and derivative values. This is a powerful technique but has two primary limitations: legacy code and performance. Functions must be either polymorphic (templated) or explicitly defined on the AD-enabled type to be differentiated. Differentiation of pre-existing source code using built-in types such as double and float is not possible. Users are required to use an additional level of abstraction in the form of library-specific types instead of first-class language features. Moreover, the performance of derivative generation can be suboptimal due to the C++ metaprogramming system, which usually constructs deep template instantiation chains. Performance can be even more problematic when computing higher-order derivatives.",
"Tools based on source transformation analyze the source code of the original function and build another source code for the derivative function. Such techniques typically accept and generate any code using built-in features of the original language and do not require custom libraries. On the other hand, they require an additional pass over the source file to analyze and generate derivative code. Source transformation can fully utilize source-level optimizations and has reasonably good performance. Implementation is more complicated and it is problematic to achieve full coverage of C++ language features. While full integration with a compiler can make AD a first-class language feature that is transparent for the user, most current implementations for C++ are based on custom parsers that do not have full coverage of the vast variety of C++ language constructs and require a separate step before compilation."
],
[
"Automatic differentiation in ROOT is based on Clad BIBREF4, BIBREF5. Clad is a source transformation AD tool for C++. It is based on LLVM compiler infrastructure BIBREF15 and is implemented as a plugin for C++ compiler Clang, which allows Clad to be transparently integrated into the compilation phase and to utilize large parts of the compiler. Clad relies on Clang's parsing and code generation functionality and can differentiate complicated C++ constructs. Clad supports both forward and reverse mode. It is available as a standalone Clang plugin that, when attached to the compiler, produces derivatives in the compilation phase.",
"On top of that, Clad is integrated directly into ROOT to provide AD functionality as an integral part of the framework. ROOT has a C++ interpreter, Cling BIBREF16, which is built on top of LLVM and Clang. This allows Clad to be attached to Cling as a plugin in a similar way as it can be attached to Clang. In this section, we discuss 1) the architecture of Clad and its interaction with Cling; and 2) the details of its integration into ROOT.",
"Clad operates on Clang AST (abstract syntax tree) by analyzing the AST of the original function and generating the AST of the derivative. Clad provides two API functions: clad::differentiate for forward mode and clad::gradient for reverse mode, which can be used directly in the source code to mark a function for differentiation (see BIBREF5 for more details on usage and code examples).",
"The information flow of interactions with Cling during differentiation (Figure FIGREF13) is:",
"A function is marked for differentiation with the C++ construct clad::differentiate or clad::gradient (step 1).",
"Cling in ROOT performs incremental compilation and receives an abstract syntax tree (AST) representation of the code (step 2).",
"Cling detects the differentiation marker and sends the AST of the original function to Clad, which transforms the AST to produce the AST of the derivative (step 3).",
"Clad returns the derivative AST to Cling for code generation and execution by the low level LLVM primitives (steps 4, 5, 6, 7). Alternatively, if Clad was configured for non-interactive use, the generated AST can be converted to a C++ source code and written to a text file. The generated code then can be compiled with any C++ compiler (steps 8, 9).",
"Inside of ROOT, interface functions clad::differentiate and clad::gradient are accessible via include <Math/CladDerivator.h>. Clad is also directly integrated into the TFormula class that encapsulates the concept of multidimensional mathematical functions in ROOT. TFormula is a primitive in ROOT's math package which is connected to the Cling interpreter. In the context of TFormula, Clad can differentiate functions available in the interpreter. The TFormula::GenerateGradientPar method uses Clad to differentiate the underlying code of the formula with respect to its parameters and generate the code for the gradient. TFormula::GradientPar method then evaluates the gradient at a specified point."
],
[
"In this section, we empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method) in ROOT. We show that AD can drastically improve accuracy and performance of derivative evaluation, compared to ND."
],
[
"As stated in Section SECREF1, numerical differentiation may give imprecise results while AD computes the derivatives exactly. We show an example of a function where this difference is apparent: AD provides exact result while ND suffers from the loss of accuracy.",
"The function is the PDF of Breit-Wigner distribution (Eq. DISPLAY_FORM19), whose derivative with respect to $\\Gamma $ (Eq. DISPLAY_FORM20) has critical points at $\\Gamma =\\pm {2x}$. In ROOT, the function is implemented as in (Listing SECREF18).",
"inline double breitwignerpdf(double x, double gamma, double x0 = 0) { double gammahalf = gamma/2.0; return gammahalf/(M_PI * ((x-x0)*(x-x0) + gammahalf*gammahalf)); }",
"Listing: Breit-Wigner PDF implementation in ROOT",
"When evaluating the derivative of breitwignerpdf with respect to gamma at x=1, gamma=2, ND in ROOT yields a result close to 0 with an absolute error of $10^{-13}$, despite the fact that the function is smooth and well-conditioned at this point. The approximation error becomes larger when the derivative is evaluated further from the critical point. In contrast, automatic differentiation (in both modes) yields the exact result of 0."
],
[
"Section SECREF2 showed that reverse mode AD computes gradients in a single pass with a runtime complexity of at most $4 \\cdot n$, which depends only on the complexity $n$ and not the dimensionality $dim$ of the original function. On the other hand, numerical differentiation requires a separate evaluation of the original function for every dimension to compute the entire gradient, making the overall run-time complexity of gradient evaluation via the central finite difference method $2 \\cdot dim \\cdot n$. Hence, in theory, reverse mode achieves an asymptotic speedup of $O(dim)$ over numerical differentiation and can be up to $dim / 2$ times faster.",
"We experimentally verify this by comparing the performance of gradient evaluation produced by reverse mode AD against our implementation of numerical differentiation via the central finite difference method. We use the two functions in Listing SECREF21: sum, which computes the sum of all values in a vector; and mvn, which implements the PDF of a multivariate normal distribution. Both functions have a parameter dim which defines the dimension, and gradients are taken with respect to the dim-dimensional vector p. While closed-form expressions of these gradients are well-known, these functions make a good basis for a benchmark as they perform typical operations that are commonly found inside more complicated functions (e.g. +, *, pow, exp inside loops).",
"double sum(double* p, int dim) { double r = 0.0; for (int i = 0; i < dim; i++) r += p[i]; return r; } double mvn(double* x, double* p /*means*/, double sigma, int dim) { double t = 0; for (int i = 0; i < dim; i++) t += (x[i] - p[i])*(x[i] - p[i]); t = -t / (2*sigma*sigma); return std::pow(2*M_PI, -dim/2.0) * std::pow(sigma, -0.5) * std::exp(t); } Listing: Implementations of the sum and mvn functions",
"Gradients of sum produced by numerical differentiation and Clad are shown in Listing SECREF21.",
"double* sumnumgrad(double* p, int dim, double eps = 1e-8) { double* result = new double[dim]; for (int i = 0; i < dim; i++) { double pi = p[i]; p[i] = pi + eps; double v1 = sum(p, dim); p[i] = pi - eps; double v2 = sum(p, dim); result[i] = (v1 - v2)/(2 * eps); p[i] = pi; } return result; }",
"void sumadgrad(double *p, int dim, double *result) { double dr = 0; unsigned long t0; int di = 0; clad::tape<int> t1 = {}; double r = 0.; t0 = 0; for (int i = 0; i < dim; i++) { t0++; r += p[clad::push(t1, i)]; } double sumreturn = r; dr += 1; for (; t0; t0--) { double rd0 = dr; dr += rd0; result[clad::pop(t1)] += rd0; dr -= rd0; } } Listing: Gradient of sum: (left) using finite differences, (right) generated by Clad",
"We perform the evaluation for values of dim between 5 and 20480. Figure FIGREF22 shows the comparison for (a) sum; (b) mvn and confirms the expected theoretical speedup of $O(dim)$, with the AD-generated gradient being $\\sim dim/4$ times faster for sum and $\\sim dim/25$ times faster for mvn (the smaller factor for mvn is due to more expensive operations like pow and exp)."
],
[
"Figure FIGREF26 shows the performance comparisons of reverse-mode AD and ND for the task of evaluating gradients of TFormula's builtin primitive probability density functions. The functions are gaus ($dim=3$), expo ($dim=2$), crystalball ($dim=5$), breitwigner ($dim=5$) and cheb2 ($dim=4$). Despite the low dimensionality ($dim \\le 5$), AD gives significant (approx. 10x) speedups. The speedups are even larger than the expected factor of $dim/2$ that follows from the theoretical results, apparently due to the additional overhead of the implementation of numerical differentiation in ROOT, which tries to find the optimal step size for its finite difference method to improve accuracy.",
"In Figure FIGREF26, we perform fitting of a Gaussian distribution to a histogram of random samples via gradient-based optimization. In ROOT, this functionality is implemented in TFormula-based TF1 class. We can therefore use AD due to the integration of Clad into TFormula. Figure FIGREF26 compares the performance of the AD-based TF1 fitting with the numerical fitting in the Hist package. As in previous experiments, we show that AD scales better with problem dimensionality (number of histogram bins) on this task. The integration of Clad into TFormula makes it straightforward to use AD for fitting in ROOT."
],
[
"We discussed our implementation of automatic differentiation in ROOT based on Clad. We demonstrated that Clad is integrated into ROOT and can be easily used in various contexts inside ROOT (e.g. histogram fitting). Furthermore, we showed that automatic differentiation in ROOT achieves significant improvements in accuracy and performance over numerical differentiation. The performance and accuracy are promising and encourage further work in the development of Clad and its integration in ROOT.",
"Possible further improvements for Clad include optimizations to code transformation and design of a consistent interface for derivatives and gradients computation. This functionality can be further extended, including the computation of Jacobians and higher-order derivatives. In order to achieve optimal performance, the evaluation of individual derivatives could be executed in parallel. Besides, the Clad API should enable a flexible execution method based on the needs of its user."
],
[
"This work has been supported by U.S. NSF grants PHY-1450377 and 1450323."
]
]
}
|
{
"question": [
"How is correctness of automatic derivation proved?",
"Is this AD implementation used in any deep learning framework?"
],
"question_id": [
"d353a6bbdc66be9298494d0c853e0d8d752dec4b",
"e2cfaa2ec89b944bbc46e5edf7753b3018dbdc8f"
],
"nlp_background": [
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no"
],
"search_query": [
"computer vision",
"computer vision"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this section, we empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method) in ROOT. We show that AD can drastically improve accuracy and performance of derivative evaluation, compared to ND."
],
"highlighted_evidence": [
"In this section, we empirically compare automatic differentiation (AD, our implementation based on Clad) and numerical differentiation (ND, based on finite difference method) in ROOT. We show that AD can drastically improve accuracy and performance of derivative evaluation, compared to ND."
]
}
],
"annotation_id": [
"4dd979c13a81b4917f659a7642001fc09afba8e2"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"0f01280a865518c283061e77aba517769dc8d464"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: Information flow of Clad in ROOT",
"Figure 2: Comparison of reverse mode AD and ND with increasing dimension",
"Figure 3: Performance benchmarks in ROOT"
],
"file": [
"4-Figure1-1.png",
"6-Figure2-1.png",
"7-Figure3-1.png"
]
}
|
1910.10408
|
Controlling the Output Length of Neural Machine Translation
|
The recent advances introduced by neural machine translation (NMT) are rapidly expanding the application fields of machine translation, as well as reshaping the quality level to be targeted. In particular, if translations have to fit some given layout, quality should not only be measured in terms of adequacy and fluency, but also length. Exemplary cases are the translation of document files, subtitles, and scripts for dubbing, where the output length should ideally be as close as possible to the length of the input text. This paper addresses for the first time, to the best of our knowledge, the problem of controlling the output length in NMT. We investigate two methods for biasing the output length with a transformer architecture: i) conditioning the output to a given target-source length-ratio class and ii) enriching the transformer positional embedding with length information. Our experiments show that both methods can induce the network to generate shorter translations, as well as acquiring interpretable linguistic skills.
|
{
"section_name": [
"Introduction",
"Background",
"Background ::: Transformer",
"Background ::: Length encoding in summarization",
"Methods",
"Methods ::: Length Token Method",
"Methods ::: Length Encoding Method",
"Methods ::: Combining the two methods",
"Methods ::: Fine-Tuning for length control",
"Experiments ::: Data and Settings",
"Experiments ::: Models",
"Experiments ::: Evaluation",
"Results",
"Results ::: Small Data condition",
"Results ::: Large data condition",
"Results ::: Human Evaluation and Analysis",
"Related works",
"Conclusion"
],
"paragraphs": [
[
"The sequence to sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are several applications of NMT that would benefit from optimizing the output length, such as the translation of document elements that have to fit a given layout – e.g. entries of tables or bullet points of a presentation – or subtitles, which have to fit visual constraints and readability goals, as well as speech dubbing, for which the length of the translation should be as close as possible to the length of the original sentence.",
"Current NMT models do not model explicitly sentence lengths of input and output, and the decoding methods do not allow to specify desired number of tokens to be generated. Instead, they implicitly rely on the observed length of the training examples BIBREF5, BIBREF6.",
"Sequence-to-sequence models have been also applied to text summarization BIBREF7 to map the relevant information found in a long text into a limited-length summary. Such models have shown promising results by directly controlling the output length BIBREF8, BIBREF9, BIBREF10, BIBREF11. However, differently from MT, text summarization (besides being a monolingual task) is characterized by target sentences that are always much shorter than the corresponding source sentences. While in MT, the distribution of the relative lengths of source and target depends on the two languages and can significantly vary from one sentence pair to another due to stylistic decisions of the translator and linguistic constraints (e.g. idiomatic expressions).",
"In this work, we propose two approaches to control the output length of a transformer NMT model. In the first approach, we augment the source side with a token representing a specific length-ratio class, i.e. short, normal, and long, which at training time corresponds to the observed ratio and at inference time to the desired ratio. In the second approach, inspired by recent work in text summarization BIBREF11, we enrich the position encoding used by the transformer model with information representing the position of words with respect to the end of the target string.",
"We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01."
],
[
"Our proposal is based on the transformer architecture and a recently proposed extension of its positional encoding aimed to control the length of generated sentences in text summarization."
],
[
"Transformer BIBREF12 is a sequence-to-sequence architecture that processes sequences using only attention and feed forward layers. Its core component is the so-called multi-head attention, which computes attention BIBREF0, BIBREF13 between two sequences in a multi-branch fashion BIBREF14. Within the encoder or the decoder, each layer first computes attention between two copies of the same sequence (self-attention). In the decoder, this step is followed by an attention over the encoder output sequence. The last step in each layer is a two-layered time-distributed feed-forward network, with a hidden size larger than its input and output. Attention and feed-forward layers are characterized by a position-invariant processing of their input. Thus, in order to enrich input embeddings in source and target with positional information, they are summed with positional vectors of the same dimension $d$, which are computed with the following trigonometric encoding ($\\text{PE}$):",
"for $i=1,\\ldots ,d/2$."
],
[
"Recently, an extension of the positional encoding BIBREF11 was proposed to model the output length for text summarization. The goal is achieved by computing the distance from every position to the end of the sentence. The new length encoding is present only in the decoder network as an additional vector summed to the input embedding. The authors proposed two different variants. The first variant replaces the variable pos in equations (1-2) with the difference $len - pos$, where len is the sentence length. The second variant attempts to model the proportion of the sentence that has been covered at a given position by replacing the constant 10000 in the denominator of equations (1-2) with $len$. As decoding is performed at the character level, len and pos are given in number of characters. At training time, len is the observed length of the reference summary, while at inference time it is the desired length."
],
[
"We propose two methods to control the output length in NMT. In the first method we partition the training set in three groups according to the observed length ratio of the reference over the source text. The idea is to let the model learn translation variants by observing them jointly with an extra input token. The second method extends the Transformer positional encoding to give information about the remaining sentence length. With this second method the network can leverage fine-grained information about the sentence length."
],
[
"Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\\text{min}$ and $t_\\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\\text{min}$ and $t_\\text{max}$ are in the normal group, the ones with ratio below $t_\\text{min}$ in short and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group."
],
[
"Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length:",
"where $i=1,\\ldots ,d/2$.",
"Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers:",
"where $q_N: [0, 1] \\rightarrow \\lbrace 0, 1, .., N\\rbrace $ is simply defined as $q_N(x) = \\lfloor {x \\times N}\\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9."
],
[
"We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length."
],
[
"Training an NMT model from scratch is a compute intensive and time consuming task. Alternatively, fine-tuning a pre-trained network shows to improve performance in several NMT scenarios BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. For our length control approaches, we further propose to use fine-tuning an NMT model with length information, instead of training it from scratch. By adopting a fine-tuning strategy, we specifically aim; i) to decouple the performance of the baseline NMT model from that of the additional length information, ii) control the level of aggressiveness that can come from the data (length token) and the model (length encoding), and iii) make the approaches versatile to any pre-trained model. More importantly, it will allow to transform any NMT model to an output length aware version, while getting better improvements on the quality of the generated sequences."
],
[
"Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder.",
"In all the experiments, we use the Adam BIBREF26 optimizer with an initial learning rate of $1\\times 10^{-7}$ that increases linearly up to $0.001$ for 4000 warm-up steps, and decreases afterwards with the inverse square root of the training step. The dropout is set to $0.3$ in all layers but the attention, where it is $0.1$. The models are trained with label smoothed cross-entropy with a smoothing factor of $0.1$. Training is performed on 8 Nvidia V100 GPUs, with batches of 4500 tokens per GPU. Gradients are accumulated for 16 batches in each GPU BIBREF27. We select the models for evaluation by applying early stopping based on the validation loss. All texts are tokenized with scripts from the Moses toolkit BIBREF28, and then words are segmented with BPE BIBREF17 with 32K joint merge rules.",
"For evaluation we take the best performing checkpoint on the dev set according to the loss. The size of the data clusters used for the length token method and their corresponding target-source length ratios are reported in Table TABREF19. The value of $N$ of the relative encoding is set to a small value (5), as in preliminary experiments we observed that a high value (100) produces results similar to the absolute encoding."
],
[
"We evaluate our Baseline Transformer using two decoding strategies: i) a standard beam search inference (standard), and ii) beam search with length penalty (penalty) set to $0.5$ to favor shorter translations BIBREF29.",
"Length token models are evaluated with three strategies that correspond to the tokens prepended to the source test set at a time (short, normal, and long), and reported as Len-Tok. Length encoding (Len-Enc) models are evaluated in a length matching condition, i.e. output length has to match input length. We report the relative (Rel) and absolute (Abs) strategies of the approach as discussed in Section SECREF10. In the small data condition, we additionally evaluated how the fine-tuning strategy compares with a model trained from scratch. In the large data condition, we added a setting that combines both the length-token and length-encoding strategies."
],
[
"To evaluate all models' performance we compute BLEU BIBREF30 with the multi-bleu.perl implementation on the single-reference test sets of the En-It and En-De pairs. Given the absence of multiple references covering different length ratios, we also report n-gram precision scores (BLEU$^*$), by multiplying the BLEU score by the inverse of the brevity penalty BIBREF30. BLEU$^*$ scores is meant to measure to what extent shorter translations are subset of longer translations.",
"The impact on translation lengths is evaluated with the mean sentence-level length ratios between MT output and source (LR$^{src}$) and between MT output and reference (LR$^{ref}$)."
],
[
"We performed experiments in two conditions: small data and larger data. In the small data condition we only use the MuST-C training set. In the large data condition, a baseline model is first trained on large data, then it is fine-tuned on the MuST-C training set using the proposed methods. Tables TABREF23 and TABREF26 lists the results for the small and large data conditions. For the two language directions they show BLEU and BLEU* scores, as well as the average length ratios."
],
[
"The baselines generate translations longer than the source sentence side, with a length ratio of 1.05 for Italian and 1.11 for German. Decoding with length penalty (penalty) slightly decreases the length ratios but they are still far from our goal of LR$^{src}$=1.00.",
"Fine-tuning. A comparison of the models trained from scratch (central portion of Table TABREF23) with their counterparts fine-tuned from the baseline (last portion of Table TABREF23) shows that the models in the first group generally generate shorter translations, but of worse quality. Additionally, the results with fine-tuning are not much different from the baseline. Existing models can be enhanced to produce shorter sentences, and little variation is observed in their translation quality.",
"Length tokens. Fine-tuning with Len-Tok (Fourth section in Table TABREF23) gives a coarse-grained control over the length, while keeping BLEU scores similar to the baseline or slightly better. Decoding with the token normal leads to translations slightly shorter than the baseline for En-It (LR$^{src}$=1.05 and LR$^{ref}$=1.02), while the token small strongly reduces the translation lengths up to almost the source length (LR$^{src}$=1.01). In the opposite side, the token long generates longer translations which are slightly worse than the others (32.00). A similar behavior is observed for En-De, where the LR$^{src}$ goes from 1.12 to 1.07 when changing normal with short, and to 1.15 with long. The results with the token long are not interesting for our task and are given only for the sake of completeness.",
"Length Encoding. The last section of Table TABREF23 lists the results of using length encoding (Len-Enc) relative (Rel) and absolute (Abs). The two encodings lead to different generated lengths, with Abs being always shorter than Rel. Unfortunately, these improvements in the lengths correspond to a significant degradation in translation quality, mostly due to truncated sentences."
],
[
"Our Baselines for the large data condition generate sentences with length ratios over the source comparable to the small data condition (LR$^\\text{src}$ and LR$^\\text{ref}$), but with better translation quality: 35.46 BLEU points for En-It and 33.96 for En-De. Length penalty slightly reduces the length ratios, which results in a 0.3 BLEU points improvement in Italian and -0.3 in German because of the brevity penalty. In the latter case, the BLEU* is slightly better than the standard baseline output. Also for the large data condition, while the length penalty slightly helps to shorten the translations, its effect is minimal and insufficient for our goal.",
"Length tokens. In En-It there is no noticeable difference in translation quality between the tokens normal and short, while there is a degradation of $\\sim 0.7$ points when using long. This last result is consistent with the ones observed before. Also in this case the token short does not degrade the BLEU score, and obtains the highest precision BLEU* with 36.22. In En-De we obtain the best results with token normal (34.46), which matches the length distribution of the references. The token short generates much shorter outputs (LR$^\\text{src}$=1.05), which are also much shorter than the reference (LR$^\\text{ref}=0.93$). Consequently the BLEU score degrades significantly (30.61), and also the BLEU* is 1 point lower than with the token normal. Longer translations can be generated with the token long, but they always come at the expense of lower quality.",
"Length encoding. For En-It, Len-Enc Rel in Table TABREF26 achieves a LR$^\\text{src}$ of 1.01 with a slight degradation of $0.3$ BLEU points over the baseline, while in the case of Abs the degradation is higher (-1.6) and LR$^\\text{src}$ is similar (1.02). Also in En-De the degradation of Rel over the baseline is only -0.3, but the reduction in terms of LR$^\\text{src}$ is very small (1.11 vs 1.13). On the other side, Abs produces much shorter translations (1.03 LR$^\\text{src}$) at the expense of a significantly lower BLEU score (30.79). When computing the BLEU* score, the absolute encoding is only 0.45 points lower than the relative encoding (33.29 vs 33.74), but -0.8 lower than the baseline.",
"Token + Encoding. So far, we have observed generally good results using the token method and translating with the tokens short and normal. while the length encoding generally produces a more predictable output length, in particular for the absolute variant. In the last experiment, we combine the two methods in order to have a system that can capture different styles (short, normal, long), as well as explicitly leveraging length information. The results listed in the last portion of Table TABREF26 (Tok+Enc) show that the relative encoding Rel produces better translations than Abs, but again it has less predictability in output length. For instance, in En-It the LR$^\\text{src}$ of Rel is 0.96 with token short and 1.02 with normal, while for En-De it is 1.01 with short and 1.08 with normal. On the other side, the Abs produces LR$^\\text{src}$ of 1.01 with both tokens in En-It and also with short in En-De, and it increases to only 1.03 with normal.",
"Controlling output length. In order to achieve LR$^\\text{src}$ as close as possible to 1.0, we set the target length during generation equal to the source length when using the length encoding methods. However, one advantage of length encoding is the possibility to set the target length to modify the average output length. We illustrate this option by using the Tok+Enc Rel system for En-It, and translating with the tokens normal or short and different scaling factors for the target length. The results, listed in Table TABREF27, show that we are able to approach an LR$^{src}$ of 1.0 with both tokens and the BLEU score is not affected with token normal (35.45) or improves with token short (35.11).",
"Discussion. Length token is an effective approach to generate translations of different lengths, but it does not allow a fine-grained control of the output lengths and its results depend on the partition of the training set into groups, which is a manual process. Length encoding allows to change the output length, but the two variants have different effects. Absolute encoding is more accurate but generates sentences with missing information. The relative encoding produces better translations than the absolute encoding, but its control over the translation length is more loose. The increased length stability is captured by the standard deviation of the length ratio with the source, which is $0.14$ for length tokens, $\\sim 0.11$ for relative encoding and $\\sim 0.07$ for absolute encoding. The advantage of the combined approach is that it can generate sentences with different style to fit different length groups, and the output length can also be tuned by modifying the target length, while no important quality degradation is observed. Additionally, the standard deviation of the lengths is the same as for the length encoding used."
],
[
"After manually inspecting the outputs of the best performing models under the large data condition, we decided to run a human evaluation only for the En-It Len-Tok model. As our ultimate goal is to be able to generate shorter translations and as close as possible to the length of the source sentences, we focused the manual evaluation on the Short output class and aimed to verify possible losses in quality with respect to the baseline system. We ran a head-to-head evaluation on the first 10 sentences of each test talk, for a total of 270 sentences, by asking annotators to blindly rank the two system outputs (ties were also permitted) in terms of quality with respect to a reference translation. We collected three judgments for each output, from 19 annotators, for a total of 807 scores (one sentence had to be discarded). Inter-annotator agreement measured with Fleiss' kappa was 0.35 (= fair agreement). Results reported in Table TABREF32 confirm the small differences observed in BLEU scores: there are only a 4% more wins for the Baseline and almost 60% of ties. The small degradation in quality of the shorter translations is statistically significant ($p<0.05$), as well as their difference in length ($p<0.001$).",
"Notice that the evaluation was quite severe towards the shorter translations, as even small changes of the meaning could affect the ranking. After the manual evaluation, we analyzed sentences in which shorter translations were unanimously judged equal or better than the standard translations. We hence tried to identify the linguistic skills involved in the generation of shorter translations, namely: (i) use of abbreviations, (ii) preference of simple verb tenses over compound tenses, (iii) avoidance of not relevant adjective, adverbs, pronouns and articles, (iv) use of paraphrases. Table TABREF33 shows examples of the application of the above strategies as found in the test set."
],
[
"As an integration of Section 2, we try to provide a more complete picture on previous work with seq-to-seq models to control the output length for text summarization, and on the use of tokens to bias in different ways the output of NMT.",
"In text summarization, BIBREF8 proposed methods to control output length either by modifying the search process or the seq-to-seq model itself, showing that the latter being more promising. BIBREF9 addressed the problem similarly to our token approach, by training the model on data bins of homogeneous output length and conditioning the output on a length token. They reported better performance than BIBREF8. Finally, BIBREF11 proposed the extension of the positional encoding of the transformer (cf. Section 2), reporting better performance than BIBREF8 and BIBREF9.",
"The use of tokens to condition the output of NMT started with the multilingual models BIBREF15, BIBREF16, and was then further applied to control the use of the politeness form in English-German NMT BIBREF32, in the translation from English into different varieties of the same language BIBREF33, for personalizing NMT to user gender and vocabulary BIBREF34, and finally to perform NMT across different translation styles BIBREF35."
],
[
"In this paper, we have proposed two solutions for the problem of controlling the output length of NMT. A first approach, inspired by multilingual NMT, allows a coarse-grained control over the length and no degradation in translation quality. A second approach, inspired by positional encoding, enables a fine-grained control with only a small error in the token count, but at the cost of a lower translation quality. A manual evaluation confirms the translation quality observed with BLEU score. In future work, we plan to design more flexible and context-aware evaluations which allow us to account for short translations that are not equivalent to the original but at the same time do not affect the overall meaning of the discourse."
]
]
}
|
{
"question": [
"Do they conduct any human evaluation?",
"What dataset do they use for experiments?",
"How do they enrich the positional embedding with length information",
"How do they condition the output to a given target-source class?",
"Which languages do they focus on?",
"What dataset do they use?",
"Do they experiment with combining both methods?"
],
"question_id": [
"22c36082b00f677e054f0f0395ed685808965a02",
"85a7dbf6c2e21bfb7a3a938381890ac0ec2a19e0",
"90bc60320584ebba11af980ed92a309f0c1b5507",
"f52b2ca49d98a37a6949288ec5f281a3217e5ae8",
"228425783a4830e576fb98696f76f4c7c0a1b906",
"9d1135303212356f3420ed010dcbe58203cc7db4",
"d8bf4a29c7af213a9a176eb1503ec97d01cc8f51"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"two",
"two",
"two"
],
"topic_background": [
"",
"",
"",
"",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"",
"",
"",
"",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01."
],
"highlighted_evidence": [
"We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01."
]
}
],
"annotation_id": [
"0f04331cbdb88dc33e06b6b970c11db7cc4e842d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"English$\\rightarrow $Italian/German portions of the MuST-C corpus",
"As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder."
],
"highlighted_evidence": [
"Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)."
]
}
],
"annotation_id": [
"d897b5cc9f257c8fd1a930a6bc1b7e1d73005efb"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "They introduce new trigonometric encoding which besides information about position uses additional length information (abs or relative).",
"evidence": [
"Methods ::: Length Encoding Method",
"Inspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length:",
"where $i=1,\\ldots ,d/2$.",
"Similarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers:",
"where $q_N: [0, 1] \\rightarrow \\lbrace 0, 1, .., N\\rbrace $ is simply defined as $q_N(x) = \\lfloor {x \\times N}\\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens. We prefer a representation based on BPE, unlike BIBREF11, as it leads to better translations with less training time BIBREF18, BIBREF19. During training, len is the observed length of the target sentence, while at inference time it is the length of the source sentence, as it is the length that we aim to match. The process is exemplified in Figure FIGREF9."
],
"highlighted_evidence": [
"Methods ::: Length Encoding Method\nInspired by BIBREF11, we use length encoding to provide the network with information about the remaining sentence length during decoding. We propose two types of length encoding: absolute and relative. Let pos and len be, respectively, a token position and the end of the sequence, both expressed in terms of number characters. Then, the absolute approach encodes the remaining length:\n\nwhere $i=1,\\ldots ,d/2$.\n\nSimilarly, the relative difference encodes the relative position to the end. This representation is made consistent with the absolute encoding by quantizing the space of the relative positions into a finite set of $N$ integers:\n\nwhere $q_N: [0, 1] \\rightarrow \\lbrace 0, 1, .., N\\rbrace $ is simply defined as $q_N(x) = \\lfloor {x \\times N}\\rfloor $. As we are interested in the character length of the target sequence, len and pos are given in terms of characters, but we represent the sequence as a sequence of BPE-segmented subwords BIBREF17. To solve the ambiguity, len is the character length of the sequence, while pos is the character count of all the preceding tokens."
]
}
],
"annotation_id": [
"6c4be2329714531078bea6390c6892868f51944e"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "They use three groups short/normal/long translation classes to learn length token, which is in inference used to bias network to generate desired length group.",
"evidence": [
"Methods ::: Length Token Method",
"Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\\text{min}$ and $t_\\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\\text{min}$ and $t_\\text{max}$ are in the normal group, the ones with ratio below $t_\\text{min}$ in short and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group."
],
"highlighted_evidence": [
"Methods ::: Length Token Method\nOur first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally-sized (normal) and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\\text{min}$ and $t_\\text{max}$ according to the length ratio distribution. All the sentence pairs with length ratio between $t_\\text{min}$ and $t_\\text{max}$ are in the normal group, the ones with ratio below $t_\\text{min}$ in short and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network to discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group."
]
}
],
"annotation_id": [
"f51792ec82eea4ff8587745ac8140a8357572bed"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"two translation directions (En-It and En-De)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source. Our ultimate goal is to generate translations whose length is not longer than that of the source string (see example in Table FIGREF1). While generating translations that are just a few words shorter might appear as a simple task, it actually implies good control of the target language. As the reported examples show, the network has to implicitly apply strategies such as choosing shorter rephrasing, avoiding redundant adverbs and adjectives, using different verb tenses, etc. We report MT performance results under two training data conditions, small and large, which show limited degradation in BLEU score and n-gram precision as we vary the target length ratio of our models. We also run a manual evaluation which shows for the En-It task a slight quality degradation in exchange of a statistically significant reduction in the average length ratio, from 1.05 to 1.01."
],
"highlighted_evidence": [
"We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is on average longer than the length of the source.",
"En-It, En-De in both directions"
]
}
],
"annotation_id": [
"498073e28e7f3074adbd65f4b3680a421b721175"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"English$\\rightarrow $Italian/German portions of the MuST-C corpus",
"As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De). While our main goal is to verify our hypotheses on a large data condition, thus the need to include proprietary data, for the sake of reproducibility in both languages we also provide results with systems only trained on TED Talks (small data condition). When training on large scale data we use Transformer with layer size of 1024, hidden size of 4096 on feed forward layers, 16 heads in the multi-head attention, and 6 layers in both encoder and decoder. When training only on TED talks, we set layer size of 512, hidden size of 2048 for the feed forward layers, multi-head attention with 8 heads and again 6 layers in both encoder and decoder."
],
"highlighted_evidence": [
"Our experiments are run using the English$\\rightarrow $Italian/German portions of the MuST-C corpus BIBREF25, which is extracted from TED talks, using the same train/validation/test split as provided with the corpus (see Table TABREF18). As additional data, we use a mix of public and proprietary data for about 16 million sentence pairs for English-Italian (En-It) and $4.4$ million WMT14 sentence pairs for the English-German (En-De)."
]
}
],
"annotation_id": [
"6bfc48103d84dc0223b89994e5583504b0fb8bf8"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Methods ::: Combining the two methods",
"We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length."
],
"highlighted_evidence": [
"Methods ::: Combining the two methods\nWe further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint to bias NMT to produce short or long translation with respect to the source, actually no length information is given to the network. On the other side, length encoding leverages information about the target length, but it is agnostic of the source length."
]
}
],
"annotation_id": [
"223910aa36816d4bd67012d8c487b2f175bfea2e"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: German and Italian human and machine translations (MT) are usually longer than their English source (SRC). We investigate enhanced NMT (MT*) that can also generate translations shorter than the source length. Text in red exceeds the length of the source, while underlined words point out the different translation strategy of the enhanced NMT model.",
"Figure 2: Training NMT with three length ratio classes permits to get outputs of different length at inference time.",
"Figure 3: Transformer architecture with decoder input enriched with (relative) length embedding computed according to the desired target string length (12 characters in the example).",
"Table 1: Train, validation and test data size in number of examples.",
"Table 2: Train data category after assigning the length tokens (normal, short and long).",
"Table 3: Performance of the baseline and models with length information trained from scratch and or by fine-tuning, in terms of BLEU, BLEU∗, mean length ratio of the output against the source (LRsrc) and the reference (LRref ). italics shows the best performing model under each category, while bold shows the wining strategy.",
"Table 4: Large scale experiments comparing the baseline, length token, length encoding and their combination.",
"Table 5: Results for En-It with Tok+Enc Rel by scaling the target length with different constant factors.",
"Table 6: Manual evaluation on En-It (large data) ranking translation quality of the baseline (standard) and token short translation against the reference translation.",
"Table 7: Examples of shorter translation fragments obtained by paraphrasing (italics), drop of words (red), and change of verb tense (underline)."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"4-Figure3-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png"
]
}
|
1606.05286
|
Spectral decomposition method of dialog state tracking via collective matrix factorization
|
The task of dialog management is commonly decomposed into two sequential subtasks: dialog state tracking and dialog policy learning. In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate the true dialog state from noisy observations produced by the speech recognition and the natural language understanding modules. The state tracking task is primarily meant to support a dialog policy. From a probabilistic perspective, this is achieved by maintaining a posterior distribution over hidden dialog states composed of a set of context dependent variables. Once a dialog policy is learned, it strives to select an optimal dialog act given the estimated dialog state and a defined reward function. This paper introduces a novel method of dialog state tracking based on a bilinear algebraic decomposition model that provides an efficient inference schema through collective matrix factorization. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset and we show that the proposed tracker gives encouraging results compared to the state-of-the-art trackers that participated in this standard benchmark. Finally, we show that the prediction schema is computationally efficient in comparison to previous approaches.
|
{
"section_name": [
"Introduction",
"Transactional dialog state tracking",
"Generative Dialog State Tracking",
"Discriminative Dialog State Tracking",
"Spectral decomposition model for state tracking in slot-filling dialogs",
"Learning method",
"Prediction method",
"Experimental settings and Evaluation",
"Restaurant information domain",
"Experimental results",
"Related work",
"Conclusion"
],
"paragraphs": [
[
"The field of autonomous dialog systems is rapidly growing with the spread of smart mobile devices but it still faces many challenges to become the primary user interface for natural interaction through conversations. Indeed, when dialogs are conducted in noisy environments or when utterances themselves are noisy, correctly recognizing and understanding user utterances presents a real challenge. In the context of call-centers, efficient automation has the potential to boost productivity through increasing the probability of a call's success while reducing the overall cost of handling the call. One of the core components of a state-of-the-art dialog system is a dialog state tracker. Its purpose is to monitor the progress of a dialog and provide a compact representation of past user inputs and system outputs represented as a dialog state. The dialog state encapsulates the information needed to successfully finish the dialog, such as users' goals or requests. Indeed, the term “dialog state” loosely denotes an encapsulation of user needs at any point in a dialog. Obviously, the precise definition of the state depends on the associated dialog task. An effective dialog system must include a tracking mechanism which is able to accurately accumulate evidence over the sequence of turns of a dialog, and it must adjust the dialog state according to its observations. In that sense, it is an essential componant of a dialog systems. However, actual user utterances and corresponding intentions are not directly observable due to errors from Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU), making it difficult to infer the true dialog state at any time of a dialog. A common method of modeling a dialog state is through the use of a slot-filling schema, as reviewed in BIBREF0 . In slot-filling, the state is composed of a predefined set of variables with a predefined domain of expression for each of them. 
The goal of the dialog system is to efficiently instantiate each of these variables thereby performing an associated task and satisfying the corresponding intent of the user.",
"Various approaches have been proposed to define dialog state trackers. The traditional methods used in most commercial implementations use hand-crafted rules that typically rely on the most likely result from an NLU module as described in BIBREF1 . However, these rule-based systems are prone to frequent errors as the most likely result is not always the correct one. Moreover, these systems often force the human customer to respond using simple keywords and to explicitly confirm everything they say, creating an experience that diverges considerably from the natural conversational interaction one might hope to achieve as recalled in BIBREF2 . More recent methods employ statistical approaches to estimate the posterior distribution over the dialog states allowing them to represent the uncertainty of the results of the NLU module. Statistical dialog state trackers are commonly categorized into one of two approaches according to how the posterior probability distribution over the state calculation is defined. In the first type, the generative approach uses a generative model of the dialog dynamic that describes how the sequence of utterances are generated by using the hidden dialog state and using Bayes' rule to calculate the posterior distribution of the state. It has been a popular approach for statistical dialog state tracking, since it naturally fits into the Partially Observable Markov Decision Process (POMDP) models as described in BIBREF3 , which is an integrated model for dialog state tracking and dialog strategy optimization. Using this generic formalism of sequential decision processes, the task of dialog state tracking is to calculate the posterior distribution over an hidden state given an history of observations. In the second type, the discriminative approach models the posterior distribution directly through a closed algebraic formulation as a loss minimization problem. 
Statistical dialog systems, in maintaining a distribution over multiple hypotheses of the true dialog state, are able to behave robustly even in the face of noisy conditions and ambiguity. In this paper, a statistical type of approach of state tracking is proposed by leveraging the recent progress of spectral decomposition methods formalized as bilinear algebraic decomposition and associated inference procedures. The proposed model estimates each state transition with respect to a set of observations and is able to compute the state transition through an inference procedure with a linear complexity with respect to the number of variables and observations.",
"Roadmap: This paper is structured as follows, Section \"Generative Dialog State Tracking\" formally defines transactional dialogs and describes the associated problem of statistical dialog state tracking with both the generative and discriminative approaches. Section \"Spectral decomposition model for state tracking in slot-filling dialogs\" depicts the proposed decompositional model for coupled and temporal hidden variable models and the associated inference procedure based on Collective Matrix Factorization (CMF). Finally, Section \"Experimental settings and Evaluation\" illustrates the approach with experimental results obtained using a state of the art benchmark for dialog state tracking."
],
[
"The dialog state tracking task we consider in this paper is formalized as follows: at each turn of a task-oriented dialog between a dialog system and a user, the dialog system chooses a dialog act $d$ to express and the user answers with an utterance $u$ . The dialog state at each turn of a given dialog is defined as a distribution over a set of predefined variables, which define the structure of the state as mentioned in BIBREF4 . This classic state structure is commonly called slot filling and the associated dialogs are commonly referred to as transactional. Indeed, in this context, the state tracking task consists of estimating the value of a set of predefined variables in order to perform a procedure or transaction which is, in fact, the purpose of the dialog. Typically, the NLU module processes the user utterance and generates an N-best list $o = \\lbrace <d_1, f_1>, \\ldots , <d_n, f_n>\\rbrace $ , where $d_i$ is the hypothesized user dialog act and $f_i$ is its confidence score. In the simplest case where no ASR and NLU modules are employed, as in a text based dialog system as proposed in BIBREF5 the utterance is taken as the observation using a so-called bag of words representation. If an NLU module is available, standardized dialog act schemas can be considered as observations as in BIBREF6 . Furthermore, if prosodic information is available by the ASR component of the dialog system as in BIBREF7 , it can also be considered as part of the observation definition. A statistical dialog state tracker maintains, at each discrete time step $t$ , the probability distribution over states, $b(s_t)$ , which is the system's belief over the state. The general process of slot-filling, transactional dialog management is summarized in Figure 1 . First, intent detection is typically an NLU problem consisting of identifying the task the user wants the system to accomplish. 
This first step determines the set of variables to instantiate during the second step, which is the slot-filling process. This type of dialog management assumes that a set of variables are required for each predefined intention. The slot filling process is a classic task of dialog management and is composed of the cyclic tasks of information gathering and integration, in other words – dialog state tracking. Finally, once all the variables have been correctly instantiated, a common practice in dialog systems is to perform a last general confirmation of the task desired by the user before finally executing the requested task. As an example used as illutration of the proposed method in this paper, in the case of the DSTC-2 challenge, presented in BIBREF8 , the context was taken from the restaurant information domain and the considered variables to instanciate as part of the state are {Area (5 possible values) ; FOOD (91 possible values) ; Name (113 possible values) ; Pricerange (3 possible values)}. In such framework, the purpose is to estimate as early as possible in the course of a given dialog the correct instantiation of each variable. In the following, we will assume the state is represented as a concatenation of zero-one encoding of the values for each variable defining the state. Furthermore, in the context of this paper, only the bag of words has been considered as an observation at a given turn but dialog acts or detected named entity provided by an SLU module could have also been incorporated as evidence.",
"Two statistical approaches have been considered for maintaining the distribution over a state given sequential NLU output. First, the discriminative approach aims to model the posterior probability distribution of the state at time $t+1$ with regard to state at time $t$ and observations $z_{1:t}$ . Second, the generative approach attempts to model the transition probability and the observation probability in order to exploit possible interdependencies between hidden variables that comprise the dialog state."
],
[
"A generative approach to dialog state tracking computes the belief over the state using Bayes' rule, using the belief from the last turn $b(s_{t-1})$ as a prior and the likelihood given the user utterance hypotheses $p(z_t|s_t)$ , with $z_t$ the observation gathered at time $t$ . In the prior work BIBREF4 , the likelihood is factored and some independence assumptions are made: ",
"$$b_t \\propto \\sum _{s_{t-1},z_t} p(s_t|z_t, d_{t-1}, s_{t-1}) p(z_t|s_t) b(s_{t-1})$$ (Eq. 3) ",
"Figure 2 depicts a typical generative model of a dialog state tracking process using a factorial hidden Markov model proposed by BIBREF9 . The shaded variables are the observed dialog turns and each unshaded variable represents a single variable describing the task dependent variables. In this family of approaches, scalability is considered as one of the main issues. One way to reduce the amount of computation is to group the states into partitions, as proposed in the Hidden Information State (HIS) model of BIBREF10 . Other approaches to cope with the scalability problem in dialog state tracking is to adopt a factored dynamic Bayesian network by making conditional independence assumptions among dialog state components, and then using approximate inference algorithms such as loopy belief propagation as proposed in BIBREF11 or a blocked Gibbs sampling as in BIBREF12 . To cope with such limitations, discriminative methods of state tracking presented in the next part of this section aim at directly model the posterior distribution of the tracked state using a choosen parametric form."
],
[
"The discriminative approach of dialog state tracking computes the belief over a state via a trained parametric model that directly represents the belief $b(s_{t+1}) = p(s_{s+1} | s_t, z_t)$ . Maximum Entropy has been widely used in the discriminative approach as described in BIBREF13 . It formulates the belief as follows: ",
"$$b(s) = P(s|x) = \\eta .e^{w^T\\phi (x,s)}$$ (Eq. 6) ",
"where $\\eta $ is the normalizing constant, $x = (d^u_1, d^m_1, s_1, \\dots , d^u_t, d^m_t, s_t)$ is the history of user dialog acts, $d^u_i, i \\in \\lbrace 1,\\ldots ,t\\rbrace $ , the system dialog acts, $d^m_i, i \\in \\lbrace 1,\\ldots ,t\\rbrace $ , and the sequence of states leading to the current dialog turn at time $t$ . Then, $\\phi (.)$ is a vector of feature functions on $x$ and $s$ , and finally, $w$ is the set of model parameters to be learned from annotated dialog data. According to the formulation, the posterior computation has to be carried out for all possible state realizations in order to obtain the normalizing constant $\\eta $ . This is not feasible for real dialog domains, which can have a large number of variables and possible variable instantiations. So, it is vital to the discriminative approach to reduce the size of the state space. For example, BIBREF13 proposes to restrict the set of possible state variables to those that appeared in NLU results. More recently, BIBREF14 assumes conditional independence between dialog state variables to address scalability issues and uses a conditional random field to track each variable separately. Finally, deep neural models, performing on a sliding window of features extracted from previous user turns, have also been proposed in BIBREF15 . Of the current literature, this family of approaches have proven to be the most efficient for publicly available state tracking datasets. In the next section, we present a decompositional approach of dialog state tracking that aims at reconciling the two main approaches of the state of the art while leveraging on the current advances of low-rank bilinear decomposition models, as recalled in BIBREF16 , that seems particularly adapted to the sparse nature of dialog state tracking tasks."
],
[
"In this section, the proposed model is presented and the learning and prediction procedures are detailed. The general idea consists in the decomposition of a matrix $M$ , composed of a set of turn's transition as rows and sparse encoding of the corresponding feature variables as columns. More precisely, a row of $M$ is composed with the concatenation of the sparse representation of (1) $s_{t}$ , a state at time $t$ (2) $s_{t+1}$ , a state at time $t+1$ (3) $z_t$ , a set of feature representating the observation. In the considered context, the bag of words composing the current turn is chosen as the observation. The parameter learning procedure is formalized as a matrix decomposition task solved through Alternating Least Square Ridge regression. The ridge regression task allows for an asymmetric penalization of the targeted variables of the state tracking task to perform. Figure 3 illustrates the collective matrix factorization task that constitutes the learning procedure of the state tracking model. The model introduces the component of the decomposed matrix to the form of latent variables $\\lbrace A, B, C\\rbrace $ , also called embeddings. In the next section, the learning procedure from dialog state transition data and the proper tracking algorithm are described. In other terms, each row of the matrix corresponds to the concatenation of a \"one-hot\" representation of a state description at time $t$ and a dialog turn at time $t$ and each column of the overall matrix $M$0 corresponds to a consider feature respectively of the state and dialog turn. Such type of modelization of the state tracking problem presents several advantages. First, the model is particularly flexible, the definition of the state and observation spaces are independent of the learning and prediction models and can be adapted to the context of tracking. 
Second, a bias by data can be applied in order to condition the transition model w.r.t separated matrices to decompose jointly as often proposed in multi-task learning as described in BIBREF17 and collective matrix factorization as detailed in BIBREF18 . Finally, the decomposition method is fast and parallelizable because it mainly leverages on core methods of linear algebra. From our knowledge, this proposition is the first attend to formalize and solve the state tracking task using a matrix decomposition approach."
],
[
"For the sake of simplicity, the $\\lbrace B,C\\rbrace $ matrices are concatenated to $E$ , and $M$ is the concatenation of the matrices $\\lbrace S_t,S_{t+1},Z_t\\rbrace $ depicted in Figure 3 . Equation 9 defines the optimization task, i.e. the loss function, associated with the learning problem of latent variable search $\\lbrace A,E\\rbrace $ . ",
"$$\\min _{A,E} || (M - AE ) W||_2^2 + \\lambda _a ||A||_2^2 + \\lambda _b ||E||_2^2\n\\hspace{5.0pt},$$ (Eq. 9) ",
"where $\\lbrace \\lambda _a, \\lambda _b\\rbrace \\in \\mathbb {R}^2$ are regularization hyper-parameters and $W$ is a diagonal matrix that increases the weight of the state variables, $s_{t+1}$ in order bias the resulting parameters $\\lbrace A,E\\rbrace $ toward better predictive accuracy on these specific variables. This type of weighting approach has been shown to be as efficient in comparable generative discriminative trade-off tasks as mentioned in BIBREF19 and BIBREF20 . An Alternating Least Squares method that is a sequence of two convex optimization problems is used in order to perform the minimization task. First, for known $E$ , compute: ",
"$$A^* = \\operatornamewithlimits{arg\\,min}_{A} || (M - AE ) W ||_2^2 + \\lambda _a ||A||_2^2\n\\hspace{5.0pt},$$ (Eq. 10) ",
"then for a given $A$ , ",
"$$E^* = \\operatornamewithlimits{arg\\,min}_{E} || (M - AE) W ||_2^2 + \\lambda _b ||E||_2^2$$ (Eq. 11) ",
"By iteratively solving these two optimization problems, we obtain the following fixed-point regularized and weighted alternating least square algorithms where $t$ correspond to the current step of the overall iterative process: ",
"$$A_{t+1} \\leftarrow (E_{t}^TWE_{t} + \\lambda _a\\mathbb {I})^{-1}E_{t}^TWM$$ (Eq. 12) ",
"$$E_{t+1} \\leftarrow (A_{t}^TA_{t} + \\lambda _b\\mathbb {I})^{-1}A_{t}^TM$$ (Eq. 13) ",
"As presented in Equation 12 , the $W$ matrix is only involved for the updating of $A$ because only the subset of the columns of $E$ , representing the features of the state to predict, are weighted differently in order to increase the importancd of the corresponding columns in the loss function. For the optimization of the latent representation composing $E$ , presented in Equation 13 , each call session's embeddings stored in $A$ hold the same weight, so in this second step of the algorithm, $W$ is actually an identity matrix and so does not appear."
],
[
"The prediction process consists of (1) computing the embedding of a current transition by solving the corresponding least square problem based on the two variables $\\lbrace s_t,z_t\\rbrace $ that correspond to our current knowledge of the state at time $t$ and the set of observations extracted from the last turn that is composed with the system and user utterances, (2) estimating the missing values of interest, i.e. the likelihood of each value of each variable that constitutes the state at time $(t+1)$ , $s_{t+1}$ , by computing the cross-product between the transition embedding calculated in (1) and the corresponding column embeddings of $E$ , and of the value of each variable of $s_{t+1}$ . More precisely, we write this decomposition as ",
"$$M = A.E^T$$ (Eq. 15) ",
"where $M$ is the matrix of data to decompose and $.$ the matrix-matrix product operator. As in the previous section, $A$ has a row for each transition embedding, and $E$ has a column for each variable-value embedding in the form of a zero-one encoding. When a new row of observations $m_i$ for a new set of variables state $s_i$ and observations $z_i$ and $E$ is fixed, the purpose of the prediction task is to find the row $a_i$ of $A$ such that: ",
"$$a_i.E^T \\approx m^T_i$$ (Eq. 16) ",
"Even if it is generally difficult to require these to be equal, we can require that these last elements have the same projection into the latent space: ",
"$$a_i^T.E^T.E = m_i^T.E$$ (Eq. 17) ",
"Then, the classic closed form solution of a linear regression task can be derived: ",
"$$a_i^T = m_i^T.E.(E^T.E)^{-1} \\\\\na_i = (E^T.E)^{-1}.E^T.m_i$$ (Eq. 18) ",
"In fact, Equation 18 is the optimal value of the embedding of the transition $m_i$ , assuming a quadratic loss is used. Otherwise it is an approximation, in the case of a matrix decomposition of $M$ using a logistic loss for example. Note that, in equation 18 , $\n(E^T.E)^{-1}$ requires a matrix inversion, but for a low dimensional matrix (the size of the latent space). Several advantages can be identified in this approach. First, at learning time, alternative ridge regression is computationally efficient because a closed form solution exists at each step of the optimization process employed to infer the parameters, i.e the low rank matrices, of the model. Second, at decision time, the state tracking procedure consists of (1) computing the embedding $a$ of the current transition using the current state estimation $s_t$ and the current observation set $z_t$ and (2) computing the distribution over the state defined as a vector-matrix product between $a$ and the latent matrix $E$ . Finally, this inference method can be partially associated to the general technique of matrix completion. But, a proper matrix completion task would have required a matrix $M$ with missing value corresponding to the exhausive list of the possible triples ${s_t, s_{t+1}, z_t}$ , which is obviously intractable to represent and decompose."
],
[
"In a first section, the dialog domain used for the evaluation of our dialog tracker is described and the different probability models used for the domain. In a second section, we present a first set of experimental results obtained through the proposed approach and its comparison to several reported results of approaches of the state of the art."
],
[
"We used the DSTC-2 dialog domain as described in BIBREF21 in which the user queries a database of local restaurants by interacting with a dialog system. The dataset for the restaurant information domain were originally collected using Amazon Mechanical Turk. A usual dialog proceeds as follows: first, the user specifies his personal set of constraints concerning the restaurant he looks for. Then, the system offers the name of a restaurant that satisfies the constraints. User then accepts the offer, and requests for additional information about accepted restaurant. The dialog ends when all the information requested by the user are provided. In this context, the dialog state tracker should be able to track several types of information that composes the state like the geographic area, the food type, the name and the price range slots. In this paper, we restrict ourselves to tracking these variables, but our tracker can be easily setup to track others as well if they are properly specified. The dialog state tracker updates its belief turn by turn, receiving evidence from the NLU module with the actual utterance produced by the user. In this experiment, it has been chosen to restrict the output of the NLU module to the bag of word of the user utterances in order to be comparable the most recent approaches of state tracking like proposed in BIBREF5 that only use such information as evidence. One important interest in such approach is to dramatically simplify the process of state tracking by suppressing the NLU task. In fact, NLU is mainly formalized in current approaches as a supervised learning approach. The task of the dialog state tracker is to generate a set of possible states and their confidence scores for each slot, with the confidence score corresponding to the posterior probability of each variable state w.r.t the current estimation of the state and the current evidence. 
Finally, the dialog state tracker also maintains a special variable state, called None, which represents that a given variable composing the state has not been observed yet. For the rest of this section, we present experimental results of state tracking obtained in this dataset and we compare with state of the art generative and discriminative approaches."
],
[
"As a comparison to the state of the art methods, Table 1 presents accuracy results of the best Collective Matrix Factorization model, with a latent space dimension of 350, which has been determined by cross-validation on a development set, where the value of each slot is instantiated as the most probable w.r.t the inference procedure presented in Section \"Spectral decomposition model for state tracking in slot-filling dialogs\" . In our experiments, the variance is estimated using standard dataset reshuffling. The same results are obtained for several state of the art methods of generative and discriminative state tracking on this dataset using the publicly available results as reported in BIBREF22 . More precisely, as provided by the state-of-the-art approaches, the accuracy scores computes $p(s^*_{t+1}|s_t,z_t)$ commonly name the joint goal. Our proposition is compared to the 4 baseline trackers provided by the DSTC organisers. They are the baseline tracker (Baseline), the focus tracker (Focus), the HWU tracker (HWU) and the HWU tracker with “original” flag set to (HWU+) respectively. Then a comparison to a maximum entropy (MaxEnt) proposed in BIBREF23 type of discriminative model and finally a deep neural network (DNN) architecture proposed in BIBREF24 as reported also in BIBREF22 is presented."
],
[
"As depicted in Section \"Generative Dialog State Tracking\" , the litterature of the domain can mainly decomposed into three family of approaches, rule-based, generative and discriminative. In previous works on this topics, BIBREF25 formally used particle filters to perform inference in a Bayesian network modeling of the dialog state, BIBREF26 presented a generative tracker and showed how to train an observation model from transcribed data, BIBREF27 grouped indistinguishable dialog states into partitions and consequently performed dialog state tracking on these partitions instead of the individual states, BIBREF11 used a dynamic Bayesian network to represent the dialog model in an approximate form. So, most attention in the dialog state belief tracking literature has been given to generative Bayesian network models until recently as proposed in BIBREF28 and BIBREF11 . On the other hand, the successful use of discriminative models for belief tracking has recently been reported by BIBREF29 and BIBREF5 and was a major theme in the results of the recent edition of the Dialog State Tracking Challenge. In this paper, a latent decomposition type of approach is proposed in order to address this general problem of dialog system. Our method gives encouraging results in comparison to the state of the art dataset and also does not required complex inference at test time because, as detailed in Section \"Spectral decomposition model for state tracking in slot-filling dialogs\" , the tracking algorithm hold a linear complexity w.r.t the sum of realization of each considered variables defining the state to track which is what we believe is one of the main advantage of this method. Secondly collective matrix factorization paradigm also for data fusion and bias by data type of modeling as successfully performed in matrix factorization based recommender systems BIBREF30 ."
],
[
"In this paper, a methodology and algorithm for efficient state tracking in the context of slot-filling dialogs has been presented. The proposed probabilistic model and inference algorithm allows efficient handling of dialog management in the context of classic dialog schemes that constitute a large part of task-oriented dialog tasks. More precisely, such a system allows efficient tracking of hidden variables defining the user goal using any kind of available evidence, from utterance bag-of-words to the output of a Natural Language Understanding module. Our current investigation on this subject are the beneficiary of distributional word representation as proposed in BIBREF31 to cope with the question of unknown words and unknown slots as suggested in BIBREF32 . In summary, the proposed approach differentiates itself by the following points from the prior art: (1) by producing a joint probability model of the hidden variable transition in a given dialog state and the observations that allow tracking the current beliefs about the user goals while explicitly considering potential interdependencies between state variables (2) by proposing the necessary computational framework, based on collective matrix factorization, to efficiently infer the distribution over the state variables in order to derive an adequate dialog policy of information seeking in this context. Finally, while transactional dialog tracking is mainly useful in the context of autonomous dialog management, the technology can also be used in dialog machine reading and knowledge extraction from human-to-human dialog corpora as proposed in the fourth edition of the Dialog State Tracking Challenge."
]
]
}
|
{
"question": [
"What state-of-the-art models are compared against?"
],
"question_id": [
"73abb173a3cc973ab229511cf53b426865a2738b"
],
"nlp_background": [
"infinity"
],
"topic_background": [
"familiar"
],
"paper_read": [
"no"
],
"search_query": [
"dialog"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"a deep neural network (DNN) architecture proposed in BIBREF24 ",
"maximum entropy (MaxEnt) proposed in BIBREF23 type of discriminative model"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"As a comparison to the state of the art methods, Table 1 presents accuracy results of the best Collective Matrix Factorization model, with a latent space dimension of 350, which has been determined by cross-validation on a development set, where the value of each slot is instantiated as the most probable w.r.t the inference procedure presented in Section \"Spectral decomposition model for state tracking in slot-filling dialogs\" . In our experiments, the variance is estimated using standard dataset reshuffling. The same results are obtained for several state of the art methods of generative and discriminative state tracking on this dataset using the publicly available results as reported in BIBREF22 . More precisely, as provided by the state-of-the-art approaches, the accuracy scores computes $p(s^*_{t+1}|s_t,z_t)$ commonly name the joint goal. Our proposition is compared to the 4 baseline trackers provided by the DSTC organisers. They are the baseline tracker (Baseline), the focus tracker (Focus), the HWU tracker (HWU) and the HWU tracker with “original” flag set to (HWU+) respectively. Then a comparison to a maximum entropy (MaxEnt) proposed in BIBREF23 type of discriminative model and finally a deep neural network (DNN) architecture proposed in BIBREF24 as reported also in BIBREF22 is presented."
],
"highlighted_evidence": [
"Then a comparison to a maximum entropy (MaxEnt) proposed in BIBREF23 type of discriminative model and finally a deep neural network (DNN) architecture proposed in BIBREF24 as reported also in BIBREF22 is presented.\n\n"
]
}
],
"annotation_id": [
"0f1c509049f53c831e6715cbbe308ae4340e1b37"
],
"worker_id": [
"f320efb1fbb744616e420aaf8da0f9622b75b2ed"
]
}
]
}
|
{
"caption": [
"Figure 1: Prototypical transactional dialog management process, also called slot-filling dialog management",
"Figure 2: Generative Dialog State Tracking using a factorial HMM",
"Figure 3: Spectral State Tracking, Collective Matrix Factorization model as inference procedure",
"Table 1: Accuracy of the proposed model on the DSTC-2 test-set"
],
"file": [
"4-Figure1-1.png",
"4-Figure2-1.png",
"6-Figure3-1.png",
"9-Table1-1.png"
]
}
|
2002.00876
|
Torch-Struct: Deep Structured Prediction Library
|
The literature on structured prediction for NLP describes a rich collection of distributions and algorithms over sequences, segmentations, alignments, and trees; however, these algorithms are difficult to utilize in deep learning frameworks. We introduce Torch-Struct, a library for structured prediction designed to take advantage of and integrate with vectorized, auto-differentiation based frameworks. Torch-Struct includes a broad collection of probabilistic structures accessed through a simple and flexible distribution-based API that connects to any deep learning model. The library utilizes batched, vectorized operations and exploits auto-differentiation to produce readable, fast, and testable code. Internally, we also include a number of general-purpose optimizations to provide cross-algorithm efficiency. Experiments show significant performance gains over fast baselines and case-studies demonstrate the benefits of the library. Torch-Struct is available at this https URL.
|
{
"section_name": [
"Introduction",
"Related Work",
"Motivating Case Study",
"Library Design",
"Technical Approach ::: Conditional Random Fields",
"Technical Approach ::: Dynamic Programming and Semirings",
"Optimizations",
"Optimizations ::: a) Parallel Scan Inference",
"Optimizations ::: b) Vectorized Parsing",
"Optimizations ::: c) Semiring Matrix Operations",
"Conclusion and Future Work",
"Acknowledgements"
],
"paragraphs": [
[
"Structured prediction is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures. Core methods include both tractable exact approaches like dynamic programming and spanning tree algorithms as well as heuristic techniques such linear programming relaxations and greedy search.",
"Structured prediction has played a key role in the history of natural language processing. Example methods include techniques for sequence labeling and segmentation BIBREF0, BIBREF4, discriminative dependency and constituency parsing BIBREF10, BIBREF8, unsupervised learning for labeling and alignment BIBREF11, BIBREF12, approximate translation decoding with beam search BIBREF9, among many others.",
"In recent years, research into deep structured prediction has studied how these approaches can be integrated with neural networks and pretrained models. One line of work has utilized structured prediction as the final final layer for deep models BIBREF13, BIBREF14. Another has incorporated structured prediction within deep learning models, exploring novel models for latent-structure learning, unsupervised learning, or model control BIBREF15, BIBREF16, BIBREF17. We aspire to make both of these use-cases as easy to use as standard neural networks.",
"The practical challenge of employing structured prediction is that many required algorithms are difficult to implement efficiently and correctly. Most projects reimplement custom versions of standard algorithms or focus particularly on a single well-defined model class. This research style makes it difficult to combine and try out new approaches, a problem that has compounded with the complexity of research in deep structured prediction.",
"With this challenge in mind, we introduce Torch-Struct with three specific contributions:",
"Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework.",
"Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python.",
"Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization.",
"In this system description, we first motivate the approach taken by the library, then present a technical description of the methods used, and finally present several example use cases."
],
[
"Several software libraries target structured prediction. Optimization tools, such as SVM-struct BIBREF18, focus on parameter estimation. Model libraries, such as CRFSuite BIBREF19 or CRF++ BIBREF20, implement inference for a fixed set of popular models, such as linear-chain CRFs. General-purpose inference libraries, such as PyStruct BIBREF21 or TurboParser BIBREF22, utilize external solvers for (primarily MAP) inference such as integer linear programming solvers and ADMM. Probabilistic programming languages, for example languages that integrate with deep learning such as Pyro BIBREF23, allow for specification and inference over some discrete domains. Most ambitiously, inference libraries such as Dyna BIBREF24 allow for declarative specifications of dynamic programming algorithms to support inference for generic algorithms. Torch-Struct takes a different approach and integrates a library of optimized structured distributions into a vectorized deep learning system. We begin by motivating this approach with a case study."
],
[
"While structured prediction is traditionally presented at the output layer, recent applications have deployed structured models broadly within neural networks BIBREF15, BIBREF25, BIBREF16. Torch-Struct aims to encourage this general use case.",
"To illustrate, we consider a latent tree model. ListOps BIBREF26 is a dataset of mathematical functions. Each data point consists of a prefix expression $x$ and its result $y$, e.g.",
"Models such as a flat RNN will fail to capture the hierarchical structure of this task. However, if a model can induce an explicit latent $z$, the parse tree of the expression, then the task is easy to learn by a tree-RNN model $p(y | x, z)$ BIBREF16, BIBREF27.",
"A popular approach is a latent-tree RL model which we briefly summarize. The objective is to maximize the probability of the correct prediction under the expectation of a prior tree model, $p(z|x ;\\phi )$,",
"Computing the expectation is intractable so policy gradient is used. First a tree is sampled $\\tilde{z} \\sim p(z | x;\\phi )$, then the gradient with respect to $\\phi $ is approximated as,",
"where $b$ is a variance reduction baseline. A common choice is the self-critical baseline BIBREF28,",
"Finally an entropy regularization term is added to the objective encourage exploration of different trees, $ O + \\lambda \\mathbb {H}(p(z\\ |\\ x;\\phi ))$.",
"Even in this brief overview, we can see how complex a latent structured learning problem can be. To compute these terms, we need 5 different properties of the tree model $p(z\\ | x; \\phi )$:",
"[description]font=",
"[itemsep=-2pt]",
"Policy gradient, $\\tilde{z} \\sim p(z \\ |\\ x ; \\phi )$",
"Score policy samples, $p(z \\ | \\ x; \\phi )$",
"Backpropagation, $\\frac{\\partial }{\\partial \\phi } p(z\\ |\\ x; \\phi )$",
"Self-critical, $\\arg \\max _z p(z \\ |\\ x;\\phi )$",
"Objective regularizer, $\\mathbb {H}(p(z\\ |\\ x;\\phi ))$",
"For structured models, each of these terms is non-trivial to compute. A goal of Torch-Struct is to make it seamless to deploy structured models for these complex settings. To demonstrate this, Torch-Struct includes an implementation of this latent-tree approach. With a minimal amount of user code, the implementation achieves near perfect accuracy on the ListOps dataset."
],
[
"The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\\ell $, the user can request samples $z \\sim \\textsc {CRF}(\\ell )$, probabilities $\\textsc {CRF}(z;\\ell )$, modes $\\arg \\max _z \\textsc {CRF}(\\ell )$, or other distributional properties such as $\\mathbb {H}(\\textsc {CRF}(\\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as for more complex operations like attention or reinforcement learning.",
"Figure FIGREF11 demonstrates this API for a binary tree CRF over an ordered sequence, such as $p(z \\ | \\ y ;\\phi )$ from the previous section. The distribution takes in log-potentials $\\ell $ which score each possible span in the input. The distribution converts these to probabilities of a specific tree. This distribution can be queried for predicting over the set of trees, sampling a tree for model structure, or even computing entropy over all trees.",
"Table TABREF2 shows all of the structures and distributions implemented in Torch-Struct. While each is internally implemented using different specialized algorithms and optimizations, from the user's perspective they all utilize the same external distributional API, and pass a generic set of distributional tests. This approach hides the internal complexity of the inference procedure, while giving the user full access to the model."
],
[
"We now describe the technical approach underlying the library. To establish notation first consider the implementation of a categorical distribution, Cat($\\ell $), with one-hot categories $z$ with $z_i = 1$ from a set $\\cal Z$ and probabilities given by the softmax,",
"Define the log-partition as $A(\\ell ) = \\mathrm {LSE}(\\ell )$, i.e. log of the denominator, where $\\mathrm {LSE}$ is the log-sum-exp operator. Computing probabilities or sampling from this distribution, requires enumerating $\\cal Z$ to compute the log-partition $A$. A useful identity is that derivatives of $A$ yield category probabilities,",
"Other distributional properties can be similarly extracted from variants of the log-partition. For instance, define $A^*(\\ell ) = \\log \\max _{j=1}^K \\exp \\ell _j$ then: $\\mathbb {I}(z^*_i = 1) = \\frac{\\partial }{\\partial \\ell _i} A^*(\\ell ) $.",
"Conditional random fields, CRF($\\ell $), extend the softmax to combinatorial spaces where ${\\cal Z}$ is exponentially sized. Each $z$, is now represented as a binary vector over polynomial-sized set of parts, $\\cal P$, i.e. ${\\cal Z} \\subset \\lbrace 0, 1\\rbrace ^{|\\cal P|}$. Similarly log-potentials are now defined over parts $\\ell \\in \\mathbb {R}^{|\\cal P|}$. For instance, in Figure FIGREF11 each span is a part and the $\\ell $ vector is shown in the top-left figure. Define the probability of a structure $z$ as,",
"Computing probabilities or sampling from this distribution, requires computing the log-partition term $A$. In general computing this term is now intractable, however for many core algorithms in NLP there are exist efficient combinatorial algorithms for this term (as enumerated in Table TABREF2).",
"Derivatives of the log-partition again provide distributional properties. For instance, the marginal probabilities of parts are given by,",
"Similarly derivatives of $A^*$ correspond to whether a part appears in the argmax structure. $\\mathbb {I}(z^*_p = 1) = \\frac{\\partial }{\\partial \\ell _p} A^*(\\ell ) $.",
"While these gradient identities are well-known BIBREF30, they are not commonly deployed. Computing CRF properties is typically done through two-step specialized algorithms, such as forward-backward, inside-outside, or similar variants such as viterbi-backpointers BIBREF31. In our experiments, we found that using these identities with auto-differentiation on GPU was often faster, and much simpler, than custom two-pass approaches. Torch-Struct is thus designed around using gradients for distributional computations."
],
[
"Torch-Struct is a collection of generic algorithms for CRF inference. Each CRF distribution object, $\\textsc {CRF}(\\ell )$, is constructed by providing $\\ell \\in \\mathbb {R}^{|{\\cal P}|}$ where the parts $\\cal P$ are specific to the type of distribution. Internally, each distribution is implemented through a single Python function for computing the log-partition function $A(\\ell )$. From this function, the library uses auto-differentiation and the identities from the previous section, to define a complete distribution object. The core models implemented by the library are shown in Table TABREF2.",
"To make the approach concrete, we consider the example of a linear-chain CRF.",
"latent](a)$z_1$; latent, right = of a](b)$z_2$; latent, right = of b](c)$z_3$; (a) – (b) – (c);",
"The model has $C$ labels per node with a length $T=2$ edges utilizing a first-order linear-chain (Markov) model. This model has $2\\times C \\times C$ parts corresponding to edges in the chain, and thus requires $\\ell \\in \\mathbb {R}^{2\\times C \\times C}$. The log-partition function $A(\\ell )$ factors into two reduce computations,",
"Computing this function left-to-right using dynamic programming yield the standard forward algorithm for sequence models. As we have seen, the gradient with respect to $\\ell $ produces marginals for each part, i.e. the probability of a specific labeled edge.",
"We can further extend the same function to support generic semiring dynamic programming BIBREF34. A semiring is defined by a pair $(\\oplus , \\otimes )$ with commutative $\\oplus $, distribution, and appropriate identities. The log-partition utilizes $\\oplus , \\otimes = \\mathrm {LSE}, +$, but we can substitute alternatives.",
"For instance, utilizing the log-max semiring $(\\max , +)$ in the forward algorithm yields the max score. As we have seen, its gradient with respect to $\\ell $ is the argmax sequence, negating the need for a separate argmax (Viterbi) algorithm. Some distributional properties cannot be computed directly through gradient identities but still use a forward-backward style compute structure. For instance, sampling requires first computing the log-partition term and then sampling each part, (forward filtering / backward sampling). We can compute this value by overriding each backpropagation operation for the $\\bigoplus $ to instead compute a sample.",
"Table TABREF16 shows the set of semirings and backpropagation steps for computing different terms of interest. We note that many of the terms necessary in the case-study can be computed with variant semirings, negating the need for specialized algorithms."
],
[
"Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programmming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms."
],
[
"The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute, $A(\\ell )$ in this manner we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely each node layer would compute a semiring matrix multiplication, e.g. $ \\bigoplus _c \\ell _{t, \\cdot , c} \\otimes \\ell _{t^{\\prime }, c, \\cdot }$. Under this approach, we only need $O(\\log N)$ steps in Python and can use parallel GPU operations for the rest. Similar parallel approach can also be used for computing sequence alignment and semi-Markov models."
],
[
"Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through T in serial; however it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing for all $i$,",
"In order to vectorize this loop over $i, j$, we reindex the chart. Instead of using a single chart $C$, we split it into two parts: one right-facing $C_r[i, d] = C[i, i+d]$ and one left facing, $C_l[i+d, T-d] = C[i, i+d]$. After this reindexing, the update can be written.",
"Unlike the original, this formula can easily be computed as a vectorized semiring dot product. This allows us to compute $C_r[\\cdot , d]$ in one operation. Variants of this same approach can be used for all the parsing models employed."
],
[
"The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\\sum , \\times )$ semiring, these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately, for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sizes $N \\times M$ and $M \\times O$, we can broadcast with $\\otimes $ to a tensor of size $N \\times M \\times O$ and then reduce dim $M$ by $\\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory efficient tensor operations. For log, this corresponds to computing,",
"where $q = \\max _n T_{m,n} + U_{n, o}$. To optimize this operation on GPU, we utilize the TVM language BIBREF36 to lay out the CUDA loops and tune it to hardware."
],
[
"We present Torch-Struct, a library for deep structured prediction. The library achieves modularity through its adoption of a generic distributional API, completeness by utilizing CRFs and semirings to make it easy to add new algorithms, and efficiency through core optimizations to vectorize important dynamic programming steps. In addition to the problems discussed so far, Torch-Struct also includes several other example implementations including supervised dependency parsing with BERT, unsupervised tagging, structured attention, and connectionist temporal classification (CTC) for speech. The full library is available at https://github.com/harvardnlp/pytorch-struct.",
"In the future, we hope to support research and production applications employing structured models. We also believe the library provides a strong foundation for building generic tools for interpretability, control, and visualization through its probabilistic API. Finally, we hope to explore further optimizations to make core algorithms competitive with highly-optimized neural network components."
],
[
"We thank Yoon Kim, Xiang Lisa Li, Sebastian Gehrmann, Yuntian Deng, and Justin Chiu for discussion and feedback on the project. The project was supported by NSF CAREER 1845664, NSF 1901030, and research awards by Sony and AWS."
]
]
}
|
{
"question": [
"Does API provide ability to connect to models written in some other deep learning framework?",
"Is this library implemented into Torch or is framework agnostic?",
"What baselines are used in experiments?",
"What general-purpose optimizations are included?"
],
"question_id": [
"1d9b953a324fe0cfbe8e59dcff7a44a2f93c568d",
"093039f974805952636c19c12af3549aa422ec43",
"8df89988adff57279db10992846728ec4f500eaa",
"94edac71eea1e78add678fb5ed2d08526b51016b"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the library, we define a conditional random field (CRF) distribution object. From a user's standpoint, this object provides all necessary distributional properties. Given log-potentials (scores) output from a deep network $\\ell $, the user can request samples $z \\sim \\textsc {CRF}(\\ell )$, probabilities $\\textsc {CRF}(z;\\ell )$, modes $\\arg \\max _z \\textsc {CRF}(\\ell )$, or other distributional properties such as $\\mathbb {H}(\\textsc {CRF}(\\ell ))$. The library is agnostic to how these are utilized, and when possible, they allow for backpropagation to update the input network. The same distributional object can be used for standard output prediction as for more complex operations like attention or reinforcement learning."
],
"highlighted_evidence": [
"The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29."
]
}
],
"annotation_id": [
"83b0d2c9df28b611f74cbc625a6fa50df1bba8ae"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "It uses a deep learning framework (PyTorch)",
"evidence": [
"With this challenge in mind, we introduce Torch-Struct with three specific contributions:",
"Modularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework.",
"Completeness: a broad array of classical algorithms are implemented and new models can easily be added in Python.",
"Efficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization."
],
"highlighted_evidence": [
"With this challenge in mind, we introduce Torch-Struct with three specific contributions:\n\nModularity: models are represented as distributions with a standard flexible API integrated into a deep learning framework.\n\nCompleteness: a broad array of classical algorithms are implemented and new models can easily be added in Python.\n\nEfficiency: implementations target computational/memory efficiency for GPUs and the backend includes extensions for optimization."
]
}
],
"annotation_id": [
"363475920554b38997e8edef0aafd969ed8e7fcc"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Typical implementations of dynamic programming algorithms are serial in the length of the sequence",
"Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized",
"Unfortunately for other semirings, such as log and max, these operations are either slow or very memory inefficient"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Optimizations ::: a) Parallel Scan Inference",
"The commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence. On parallel hardware, an appealing approach is a parallel scan ordering BIBREF35, typically used for computing prefix sums. To compute $A(\\ell )$ in this manner, we first pad the sequence length $T$ out to the nearest power of two, and then compute a balanced parallel tree over the parts, shown in Figure FIGREF21. Concretely, each node layer would compute a semiring matrix multiplication, e.g. $ \\bigoplus _c \\ell _{t, \\cdot , c} \\otimes \\ell _{t^{\\prime }, c, \\cdot }$. Under this approach, we only need $O(\\log N)$ steps in Python and can use parallel GPU operations for the rest. A similar parallel approach can also be used for computing sequence alignment and semi-Markov models.",
"Optimizations ::: b) Vectorized Parsing",
"Computational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized. The log-partition for parsing is computed with the Inside algorithm. This algorithm must compute each width from 1 through $T$ in serial; however, it is important to parallelize each inner step. Assuming we have computed all inside spans of width less than $d$, computing the inside span of width $d$ requires computing for all $i$,",
"Optimizations ::: c) Semiring Matrix Operations",
"The two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\\sum , \\times )$ semiring, these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately, for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sizes $N \\times M$ and $M \\times O$, we can broadcast with $\\otimes $ to a tensor of size $N \\times M \\times O$ and then reduce dim $M$ by $\\bigoplus $ at a huge memory cost. To avoid this issue, we implement custom CUDA kernels targeting fast and memory efficient tensor operations. For log, this corresponds to computing,"
],
"highlighted_evidence": [
"Parallel Scan Inference\nThe commutative properties of semiring algorithms allow flexibility in the order in which we compute $A(\\ell )$. Typical implementations of dynamic programming algorithms are serial in the length of the sequence.",
"Vectorized Parsing\nComputational complexity is even more of an issue for parsing algorithms, which cannot be as easily parallelized.",
"Semiring Matrix Operations\nThe two previous optimizations reduce most of the cost to semiring matrix multiplication. In the specific case of the $(\\sum , \\times )$ semiring, these can be computed very efficiently using matrix multiplication, which is highly-tuned on GPU hardware. Unfortunately, for other semirings, such as log and max, these operations are either slow or very memory inefficient. For instance, for matrices $T$ and $U$ of sizes $N \\times M$ and $M \\times O$, we can broadcast with $\\otimes $ to a tensor of size $N \\times M \\times O$ and then reduce dim $M$ by $\\bigoplus $ at a huge memory cost."
]
}
],
"annotation_id": [
"41a5e7f9002bc00be615405addaa6e72f4201759"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Parallel Scan Inference",
"Vectorized Parsing",
"Semiring Matrix Operations"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Optimizations ::: a) Parallel Scan Inference",
"Optimizations ::: b) Vectorized Parsing",
"Optimizations ::: c) Semiring Matrix Operations",
"Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such, Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programming. Figure FIGREF17 shows the impact of these optimizations on the core algorithms."
],
"highlighted_evidence": [
"a) Parallel Scan Inference",
"b) Vectorized Parsing",
"c) Semiring Matrix Operations",
"Torch-Struct aims for computational and memory efficiency. Implemented naively, dynamic programming algorithms in Python are prohibitively slow. As such, Torch-Struct provides key primitives to help batch and vectorize these algorithms to take advantage of GPU computation and to minimize the overhead of backpropagating through chart-based dynamic programming."
]
}
],
"annotation_id": [
"0f255bdea6c34801b2ab038ea6710f9481bc417a"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: Distribution of binary trees over an 1000- token sequence. Coloring shows the marginal probabilities of every span. Torch-Struct is an optimized collection of common CRF distributions used in NLP designed to integrate with deep learning frameworks.",
"Table 1: Models and algorithms implemented in Torch-Struct. Notation is developed in Section 5. Parts are described in terms of sequence lengths N,M , label size C, segment length K, and layers / grammar size L,G. Lines of code (LoC) is from the log-partition (A(`)) implementation. T/S is the tokens per second of a batched computation, computed with batch 32, N = 25, C = 20,K = 5, L = 3 (K80 GPU run on Google Colab).",
"Figure 2: Latent Tree CRF example. (a) Logpotentials ` for each part/span. (b) Marginals for CRF(`) computed by backpropagation. (c) Mode tree argmaxz CRF(z; `). (d) Sampled tree z ∼ CRF(`).",
"Table 2: (Top) Semirings implemented in Torch-Struct. Backprop/Gradients gives overridden backpropagation computation and value computed by this combination. (Bot) Example of gradients from different semirings on sequence alignment with dynamic time warping.",
"Figure 3: Speed impact of optimizations. Time is given in seconds for 10 runs with batch 16 executed on Google Colab. (a) Speed of a linear-chain forward with 20 classes for lengths up to 500. Compares left-to-right ordering to parallel scan. (b) Speed of CKY inside with lengths up to 80. Compares inner loop versus vectorization. (c) Speed of linear-chain forward of length 20 with up to 100 classes. Compares broadcast-reduction versus CUDA semiring kernel. (Baseline memory is exhausted after 100 classes.)",
"Figure 4: Parallel scan implementation of the linearchain CRF inference algorithm. Here ⊕ ⊗ represents a semiring matrix operation and I is padding."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"4-Table2-1.png",
"6-Figure3-1.png",
"6-Figure4-1.png"
]
}
|
1906.10519
|
Embedding Projection for Targeted Cross-Lingual Sentiment: Model Comparisons and a Real-World Study
|
Sentiment analysis benefits from large, hand-annotated resources in order to train and test machine learning models, which are often data hungry. While some languages, e.g., English, have a vast array of these resources, most under-resourced languages do not, especially for fine-grained sentiment tasks, such as aspect-level or targeted sentiment analysis. To improve this situation, we propose a cross-lingual approach to sentiment analysis that is applicable to under-resourced languages and takes into account target-level information. This model incorporates sentiment information into bilingual distributional representations, by jointly optimizing them for semantics and sentiment, showing state-of-the-art performance at sentence-level when combined with machine translation. The adaptation to targeted sentiment analysis on multiple domains shows that our model outperforms other projection-based bilingual embedding methods on binary targeted sentiment tasks. Our analysis on ten languages demonstrates that the amount of unlabeled monolingual data has surprisingly little effect on the sentiment results. As expected, the choice of annotated source language for projection to a target leads to better results for source-target language pairs which are similar. Therefore, our results suggest that more efforts should be spent on the creation of resources for less similar languages to those which are resource-rich already. Finally, a domain mismatch leads to decreased performance. This suggests resources in any language should ideally cover a variety of domains.
|
{
"section_name": [
"Targeted Sentiment Classification",
"Cross-Lingual Approaches to Sentiment Analysis",
"Bilingual Distributional Models and the Contributions of this Paper",
"Previous Work",
"Machine Translation Based Methods",
"Bilingual Embedding Methods",
"Sentiment Embeddings",
"Targeted Sentiment Analysis",
"Projecting Sentiment Across Languages",
"Sentence-level Model",
"Targeted Model",
"Experiments",
"Datasets and Resources",
"Setting for Experiment 1: Sentence-level Classification",
"Setting for Experiment 2: Targeted Classification",
"Experiment 1: Sentence-level Classification",
"Experiment 2: Targeted Classification",
"Motivation",
"Experimental Setup",
"Results",
"Discussion",
"Conclusion"
],
"paragraphs": [
[
"Opinions are everywhere in our lives. Every time we open a book, read the newspaper, or look at social media, we scan for opinions or form them ourselves. We are cued to the opinions of others, and often use this information to update our own opinions Asch1955,Das2014. This is true on the Internet as much as it is in our face-to-face relationships. In fact, with its wealth of opinionated material available online, it has become feasible and interesting to harness this data in order to automatically identify opinions, which had previously been far more expensive and tedious when the only access to data was offline.",
"Sentiment analysis, sometimes referred to as opinion mining, seeks to create data-driven methods to classify the polarity of a text. The information obtained from sentiment classifiers can then be used for tracking user opinions in different domains Pang2002,Socher2013b,Nakov2013, predicting the outcome of political elections wang2012demo,bakliwal2013, detecting hate speech online Nahar2012,hartung-EtAl:2017:WASSA2017, as well as predicting changes in the stock market Pogolu2016.",
"Sentiment analysis can be modeled as a classification task, especially at sentence- and document-level, or as a sequence-labeling task at target-level. Targeted sentiment analysis aims at predicting the polarity expressed towards a particular entity or sub-aspect of that entity. This is a more realistic view of sentiment, as polarities are directed towards targets, not spread uniformly across sentences or documents. Take the following example, where we mark the sentiment target with green, positive sentiment expressions with blue, and negative sentiment expressions with red:",
"The café near my house has great coffee but I",
"never go there because the service is terrible.",
"In this sentence, it is not stated what the sentiment towards the target “café” is, while the sentiment of the target “coffee” is positive and that of “service” is negative. In order to correctly classify the sentiment of each target, it is necessary to (1) detect the targets, (2) detect polarity expressions, and (3) resolve the relations between these.",
"In order to model these relationships and test the accuracy of the learned models, hand-annotated resources are typically used for training machine learning algorithms. Resource-rich languages, e.g., English, have high-quality annotated data for both classification and sequence-labeling tasks, as well as for a variety of domains. However, under-resourced languages either completely lack annotated data or have only a few resources for specific domains or sentiment tasks. For instance, for aspect-level sentiment analysis, English has datasets available in the news domain Wiebe2005, product review domain HuandLiu2004,Ding2008,Pontiki2014,Pontiki2015, education domain Welch2016, medical domain Grasser2018, urban neighborhood domain Saeidi2016, and financial domain Maia2018. Spanish, on the other hand, has only three datasets Agerri2013,Pontiki2016, while Basque and Catalan only have one each for a single domain Barnes2018a. The cost of annotating data can often be prohibitive as training native speakers to annotate fine-grained sentiment is a long process. This motivates the need to develop sentiment analysis methods capable of leveraging data annotated in other languages."
],
[
"Previous work on cross-lingual sentiment analysis (CLSA) offers a way to perform sentiment analysis in an under-resourced language that does not have any annotated data available. Most methods relied on the availability of large amounts of parallel data to transfer sentiment information across languages. Machine translation (MT), for example, has been the most common approach to cross-lingual sentiment analysis Banea2013,Almeida2015,Zhang2017. Machine translation, however, can be biased towards domains Hua2008,Bertoldi2009,Koehn2017, does not always preserve sentiment Mohammad2016, and requires millions of parallel sentences Gavrila2011,Vaswani2017, which places a limit on which languages can benefit from these approaches. The following example illustrates that MT does not preserve sentiment (hotel review in Basque, automatically translated via translate.google.com):",
"Hotel $^{1}$ txukuna da, nahiko berria. Harreran zeuden langileen arreta $^{2}$ ez zen onena izan. Tren geltoki bat $^{3}$ du 5 minutura eta kotxez $^{4}$ berehala iristen da baina oinez $^{5}$ urruti samar dago.",
"The hotel $^{1}$ is tidy, quite new. The care of the workers at reception $^{2}$ was not the best. It's 5 minutes away from a train station $^{3}$ and it's quick to reach the car $^{4}$ , but it's a short distance away.",
"While the first two sentences are mostly well translated for the purposes of sentiment analysis, in the third, there are a number of reformulations and deletions that lead to a loss of information. It should read “It has a train station five minutes away and by car you can reach it quickly, but by foot it's quite a distance.” We can see that one of the targets has been deleted and the sentiment has flipped from negative to positive. Such common problems degrade the results of cross-lingual sentiment systems that use MT, especially at target-level.",
"Although high quality machine translation systems exist between many languages and have been shown to enable cross-lingual sentiment analysis, for the vast majority of language pairs in the world there is not enough parallel data to create these high quality MT systems. This lack of parallel data coupled with the computational expense of MT means that approaches to cross-lingual sentiment analysis that do not require MT should be preferred. Additionally, most cross-lingual sentiment approaches using MT have concentrated on sentence- and document-level, and have not explored targeted or aspect-level sentiment tasks."
],
[
"Recently, several bilingual distributional semantics models (bilingual embeddings) have been proposed and provide a useful framework for cross-lingual research without requiring machine translation. They are effective at generating features for bilingual dictionary induction Mikolov2013translation,Artetxe2016,Lample2017, cross-lingual text classification Prettenhofer2011b,Chandar2014, or cross-lingual dependency parsing Sogaard2015, among others. In this framework, words are represented as $n$-dimensional vectors which are created on large monolingual corpora in order to (1) maximize the similarity of words that appear in similar contexts and use some bilingual regularization in order to (2) maximize the similarity of translation pairs. In this work, we concentrate on a subset of these bilingual embedding methods that perform a post-hoc mapping to a bilingual space, which we refer to as embedding projection methods. One of the main advantages of these methods is that they make better use of small amounts of parallel data than MT systems, even enabling unsupervised machine translation Artetxe2018,Lample2018."
"With this paper, we provide the first extensive evaluation of cross-lingual embeddings for targeted sentiment tasks. We formulate the task of targeted sentiment analysis as classification, given the targets from an oracle. The question we attempt to address is how to infer the polarity of a sentiment target in a language that does not have any annotated sentiment data or parallel corpora with a resource-rich language. In the following Catalan sentence, for example, how can we determine that the sentiment of “servei” is negative, while that of “menjar” is positive if we do not have annotated data in Catalan or parallel data for English-Catalan?",
"El servei al restaurant va ser péssim. Al menys el menjar era bo.",
"Specifically, we propose an approach which requires (1) minimal bilingual data and instead makes use of (2) high-quality monolingual word embeddings in the source and target language. We take an intermediate step by first testing this approach on sentence-level classification. After confirming that our approach performs well at sentence-level, we propose a targeted model with the same data requirements. The main contributions are that we",
"compare projection-based cross-lingual methods to MT,",
"extend previous cross-lingual approaches to enable targeted cross-lingual sentiment analysis with minimal parallel data requirements,",
"compare different model architectures for cross-lingual targeted sentiment analysis,",
"perform a detailed error analysis, detailing the advantages and disadvantages of each method,",
"and, finally, deploy the methods in a realistic case-study to analyze their suitability beyond applications on (naturally) limited language pairs.",
"In addition, we make our code and data publicly available at https://github.com/jbarnesspain/targeted_blse to support future research. The rest of the article is organized as follows: In Section \"Previous Work\", we detail related work and motivate the need for a different approach. In Section \"Projecting Sentiment Across Languages\", we describe both the sentence-level and targeted projection approaches that we propose. In Section \"Experiments\", we detail the resources and experimental setup for both sentence and targeted classification. In Section \"Results\", we describe the results of the two experiments, as well as perform a detailed error analysis. In Section \"Case Study: Real World Deployment\", we perform a case study whose purpose is to give a more qualitative view of the models. Finally, we discuss the implications of the results in Section \"Conclusion\"."
],
[
"Sentiment analysis has become an enormously popular task with a focus on classification approaches on individual languages, but there has not been as much work on cross-lingual approaches. In this section, we detail the most relevant work on cross-lingual sentiment analysis and lay the basis for the bilingual embedding approach we propose later."
],
[
"Early work in cross-lingual sentiment analysis found that machine translation (MT) had reached a point of maturity that enabled the transfer of sentiment across languages. Researchers translated sentiment lexicons Mihalcea2007,Meng2012 or annotated corpora and used word alignments to project sentiment annotation and create target-language annotated corpora Banea2008,Duh2011a,Demirtas2013,Balahur2014d.",
"Several approaches included a multi-view representation of the data Banea2010,Xiao2012 or co-training Wan2009,Demirtas2013 to improve over a naive implementation of machine translation, where only the translated version of the data is considered. There are also approaches which only require parallel data Meng2012,Zhou2016,Rasooli2017, instead of machine translation.",
"All of these approaches, however, require large amounts of parallel data or an existing high quality translation tool, which are not always available. To tackle this issue, Barnes2016 explore cross-lingual approaches for aspect-based sentiment analysis, comparing machine translation methods and those that instead rely on bilingual vector representations. They conclude that MT approaches outperform current bilingual representation methods.",
"Chen2016 propose an adversarial deep averaging network, which trains a joint feature extractor for two languages. They minimize the difference between these features across languages by learning to fool a language discriminator. This requires no parallel data, but does require large amounts of unlabeled data and has not been tested on fine-grained sentiment analysis."
],
[
"Recently proposed bilingual embedding methods Hermann2014,Chandar2014,Gouws2015 offer a natural way to bridge the language gap. These particular approaches to bilingual embeddings, however, also require large parallel corpora in order to build the bilingual space, which gives no advantage over machine translation. Another approach to creating bilingual word embeddings, which we refer to as Projection-based Bilingual Embeddings, has the advantage of requiring relatively little parallel training data while taking advantage of larger amounts of monolingual data. In the following, we describe the most relevant approaches.",
"Mikolov2013translation find that vector spaces in different languages have similar arrangements. Therefore, they propose a linear projection which consists of learning a rotation and scaling matrix. Artetxe2016,Artetxe2017 improve upon this approach by requiring the projection to be orthogonal, thereby preserving the monolingual quality of the original word vectors.",
"Given source embeddings $S$, target embeddings $T$, and a bilingual lexicon $L$, Artetxe2016 learn a projection matrix $W$ by minimizing the square of Euclidean distances ",
"$$\\operatornamewithlimits{arg\\,min}_W \\sum _{i} ||S^{\\prime }W-T^{\\prime }||_{F}^{2}\\,,$$ (Eq. 13) ",
"where $S^{\\prime } \\in S$ and $T^{\\prime } \\in T$ are the word embedding matrices for the tokens in the bilingual lexicon $L$. This is solved using the Moore-Penrose pseudoinverse $S^{\\prime +} = (S^{\\prime T}S^{\\prime })^{-1}S^{\\prime T}$ as $W = S^{\\prime +}T^{\\prime }$, which can be computed using SVD. We refer to this approach as VecMap.",
"Lample2017 propose a similar refined orthogonal projection method to Artetxe2017, but include an adversarial discriminator, which seeks to discriminate samples from the projected space $WS$ , and the target $T$ , while the projection matrix $W$ attempts to prevent this making the projection from the source space $WS$ as similar to the target space $T$ as possible.",
"They further refine their projection matrix by reducing the hubness problem Dinu2015, which is commonly found in high-dimensional spaces. For each projected embedding $Wx$, they define the $k$ nearest neighbors in the target space, $\\mathcal {N}_{T}$, suggesting $k = 10$. They consider the mean cosine similarity $r_{T}(Wx)$ between a projected embedding $Wx$ and its $k$ nearest neighbors ",
"$$r_{T}(Wx) = \\frac{1}{k} \\sum _{y \\in \\mathcal {N}_{T}(Wx) } \\cos (Wx,y)$$ (Eq. 15) ",
"as well as the mean cosine of a target word $y$ to its neighborhood, which they denote by $r_{S}$.",
"In order to decrease similarity between mapped vectors lying in dense areas, they introduce a cross-domain similarity local scaling term (CSLS) ",
"$$\\textrm {CSLS}(Wx,y) = 2 \\cos (Wx,y) - r_{T}(Wx) - r_{S}(y)\\,,$$ (Eq. 16) ",
"which they find improves accuracy, while not requiring any parameter tuning.",
"Gouws2015taskspecific propose a method to create a pseudo-bilingual corpus with a small task-specific bilingual lexicon, which can then be used to train bilingual embeddings (Barista). This approach requires a monolingual corpus in both the source and target languages and a set of translation pairs. The source and target corpora are concatenated and then every word is randomly kept or replaced by its translation with a probability of 0.5. Any kind of word embedding algorithm can be trained with this pseudo-bilingual corpus to create bilingual word embeddings."
],
[
"Maas2011 first explored the idea of incorporating sentiment information into semantic word vectors. They proposed a topic modeling approach similar to latent Dirichlet allocation in order to collect the semantic information in their word vectors. To incorporate the sentiment information, they included a second objective whereby they maximize the probability of the sentiment label for each word in a labeled document.",
"Tang2014 exploit distantly annotated tweets to create Twitter sentiment embeddings. To incorporate distributional information about tokens, they use a hinge loss and maximize the likelihood of a true $n$-gram over a corrupted $n$-gram. They include a second objective where they classify the polarity of the tweet given the true $n$-gram. While these techniques have proven useful, they are not easily transferred to a cross-lingual setting.",
"Zhou2015 create bilingual sentiment embeddings by translating all source data to the target language and vice versa. This requires the existence of a machine translation system, which is a prohibitive assumption for many under-resourced languages, especially if it must be open and freely accessible. This motivates approaches which can use smaller amounts of parallel data to achieve similar results."
],
[
"The methods discussed so far focus on classifying textual phrases like documents or sentences. Next to these approaches, others have concentrated on classifying aspects HuandLiu2004,Liu2012,Pontiki2014 or targets Zhang2015,Zhang2016,Tang2016 to assign them with polarity values.",
"A common technique when adapting neural architectures to targeted sentiment analysis is to break the text into left context, target, and right context Zhang2015,Zhang2016, alternatively keeping the target as the final/beginning token in the respective contexts Tang2016. The model then extracts a feature vector from each context and target, using some neural architecture, and concatenates the outputs for classification.",
"More recent approaches attempt to augment a neural network with memory to model these interactions Chen2017,Xue2018,Wang2018,Liu2018. Wang2017 explore methods to improve classification of multiple aspects in tweets, while Akhtar2018 attempt to use cross-lingual and multilingual data to improve aspect-based sentiment analysis in under-resourced languages.",
"As mentioned before, MT has traditionally been the main approach for transferring information across language barriers BIBREF0 . But this is particularly problematic for targeted sentiment analysis, as changes in word order or loss of words created during translation can directly affect the performance of a classifier Lambert2015."
],
[
"In this section, we propose a novel approach to incorporate sentiment information into bilingual embeddings, which we first test on sentence-level cross-lingual sentiment classification. We then propose an extension in order to adapt this approach to targeted cross-lingual sentiment classification. Our model, Bilingual Sentiment Embeddings (Blse), are embeddings that are jointly optimized to represent both (a) semantic information in the source and target languages, which are bound to each other through a small bilingual dictionary, and (b) sentiment information, which is annotated on the source language only. We only need three resources: (1) a comparably small bilingual lexicon, (2) an annotated sentiment corpus in the resource-rich language, and (3) monolingual word embeddings for the two involved languages."
],
[
"In this section, we detail the projection objective, the sentiment objective, and finally the full objective for sentence-level cross-lingual sentiment classification. A sketch of the full sentence-level model is depicted in Figure 1 .",
"We assume that we have two precomputed vector spaces $S = \\mathbb {R}^{v \\times d}$ and $T = \\mathbb {R}^{v^{\\prime } \\times d^{\\prime }}$ for our source and target languages, where $v$ ( $v^{\\prime }$ ) is the length of the source vocabulary (target vocabulary) and $d$ ( $d^{\\prime }$ ) is the dimensionality of the embeddings. We also assume that we have a bilingual lexicon $L$ of length $n$ which consists of word-to-word translation pairs $L$ = $\\lbrace (s_{1},t_{1}),\n(s_{2},t_{2}),\\ldots , (s_{n}, t_{n})\\rbrace $ which map from source to target.",
"In order to create a mapping from both original vector spaces $S$ and $T$ to shared sentiment-informed bilingual spaces $\\mathbf {z}$ and $\\mathbf {\\hat{z}}$ , we employ two linear projection matrices, $M$ and $M^{\\prime }$ . During training, for each translation pair in $L$ , we first look up their associated vectors, project them through their associated projection matrix and finally minimize the mean squared error of the two projected vectors. This is similar to the approach taken by Mikolov2013translation , but includes an additional target projection matrix.",
"The intuition for including this second matrix is that a single projection matrix does not support the transfer of sentiment information from the source language to the target language. Without $M^{\\prime }$ , any signal coming from the sentiment classifier (see Section UID27 ) would have no affect on the target embedding space $T$ , and optimizing $M$ to predict sentiment and projection would only be detrimental to classification of the target language. We analyze this further in Section UID63 . Note that in this configuration, we do not need to update the original vector spaces, which would be problematic with such small training data.",
"The projection quality is ensured by minimizing the mean squared error ",
"$$\\textrm {MSE} = \\dfrac{1}{n} \\sum _{i=1}^{n} (\\mathbf {z_{i}} - \\mathbf {\\hat{z}_{i}})^{2}\\,,$$ (Eq. 26) ",
"where $\\mathbf {z_{i}} = S_{s_{i}} \\cdot M$ is the dot product of the embedding for source word $s_{i}$ and the source projection matrix and $\\mathbf {\\hat{z}_{i}} = T_{t_{i}} \\cdot M^{\\prime }$ is the same for the target word $t_{i}$ .",
"We add a second training objective to optimize the projected source vectors to predict the sentiment of source phrases. This inevitably changes the projection characteristics of the matrix $M$ , and consequently $M^{\\prime }$ and encourages $M^{\\prime }$ to learn to predict sentiment without any training examples in the target language.",
"In order to train $M$ to predict sentiment, we require a source-language corpus $C_{\\textrm {source}}= \\lbrace (x_{1}, y_{1}),\n(x_{2}, y_{2}), \\ldots , (x_{i}, y_{i})\\rbrace $ where each sentence $x_{i}$ is associated with a label $y_{i}$ .",
"For classification, we use a two-layer feed-forward averaging network, loosely following Iyyer2015 . For a sentence $x_{i}$ we take the word embeddings from the source embedding $S$ and average them to $\\mathbf {a}_{i} \\in \\mathbb {R}^{d}$ . We then project this vector to the joint bilingual space $\\mathbf {z}_{i} = \\mathbf {a}_{i} \\cdot M$ . Finally, we pass $\\mathbf {z}_{i}$ through a softmax layer $P$ to obtain the prediction $\\hat{y}_{i} = \\textrm {softmax} ( \\mathbf {z}_{i} \\cdot P)$ .",
"To train our model to predict sentiment, we minimize the cross-entropy error of the predictions",
"$$H = - \\sum _{i=1}^{n} y_{i} \\log \\hat{y_{i}} - (1 - y_{i}) \\log (1 - \\hat{y_{i}})\\,.$$ (Eq. 29) ",
"In order to jointly train both the projection component and the sentiment component, we combine the two loss functions to optimize the parameter matrices $M$ , $M^{\\prime }$ , and $P$ by ",
"$$J =\\hspace{-14.22636pt}\\sum _{(x,y) \\in C_{\\textrm {source}}}\\hspace{2.84526pt}\\sum _{(s,t) \\in L}\\hspace{0.0pt}\\alpha H(x,y)\n+ (1 - \\alpha ) \\cdot \\textrm {MSE}(s,t)\\,,$$ (Eq. 31) ",
"where $\\alpha $ is a hyperparameter that weights sentiment loss vs. projection loss.",
"For inference, we classify sentences from a target-language corpus $C_{\\textrm {target}}$ . As in the training procedure, for each sentence, we take the word embeddings from the target embeddings $T$ and average them to $\\mathbf {a}_{i} \\in \\mathbb {R}^{d}$ . We then project this vector to the joint bilingual space $\\mathbf {\\hat{z}}_{i} = \\mathbf {a}_{i} \\cdot M^{\\prime }$ . Finally, we pass $\\mathbf {\\hat{z}}_{i}$ through a softmax layer $P$ to obtain the prediction $\\hat{y}_{i} = \\textrm {softmax} (\n\\mathbf {\\hat{z}}_{i} \\cdot P)$ ."
],
[
"In our targeted model, we assume that the list of sentiment targets as they occur in the text is given. These can be extracted previously either by using domain knowledge Liu2005, by using a named entity recognizer Zhang2015 or by using a number of aspect extraction techniques Zhou2012. Given these targets, the task is reduced to classification. However, what remains is how to represent the target, to learn to subselect the information from the context which is relevant, how to represent this contextual information, and how to combine these representations in a meaningful way that enables us to classify the target reliably.",
"Our approach to adapt the Blse model to targeted sentiment analysis, which we call Split (depicted in Figure 2 ), is similar to the method proposed by Zhang2016 for gated recurrent networks. For a sentence with a target $a$ , we split the sentence at $a$ in order to get a left and right context, $\\textrm {con}_\\ell (a)$ and $\\textrm {con}_r(a)$ respectively.",
"Unlike the approach from Zhang2016, we do not use recurrent neural networks to create a feature vector, as Atrio2019 showed that, in cross-lingual setups, they overfit too much to word order and source-language specific information to perform well on our tasks. Therefore, we instead average each left context $\\textrm {con}_\\ell (a_i)$ , right context $\\textrm {con}_r(a_i)$ , and target $a_{i}$ separately. Although averaging is a simplified approach to create a compositional representation of a phrase, it has been shown to work well for sentiment Iyyer2015,Barnes2017. After creating a single averaged vector for the left context, right context, and target, we concatenate them and use these as input for the softmax classification layer $T \\in \\mathbb {R}^{d \\times 3}$ , where $d$ is the dimensionality of the input vectors. The model is trained on the source language sentiment data using $M$ to project, and then tested by replacing $M$ with $M^{^{\\prime }}$ , similar to the sentence-level model."
],
[
"In this section, we describe the resources and datasets, as well as the experimental setups used in both the sentence-level (Experiment 1 in Subsection \"Setting for Experiment 1: Sentence-level Classification\" ) and targeted (Experiment 2 in Subsection \"Setting for Experiment 2: Targeted Classification\" ) experiments."
],
[
"The number of datasets and resources for under-resourced languages are limited. Therefore, we choose a mixture of resource-rich and under-resourced languages for our experiments. We treat the resource-rich languages as if they were under-resourced by using similar amounts of parallel data.",
"To evaluate our proposed model at sentence-level, we conduct experiments using four benchmark datasets and three bilingual combinations. We use the OpeNER English and Spanish datasets Agerri2013 and the MultiBooked Catalan and Basque datasets BIBREF1 . All datasets contain hotel reviews which are annotated for targeted sentiment analysis. The labels include Strong Negative ( $--$ ), Negative ( $-$ ), Positive ( $+$ ), and Strong Positive ( $++$ ). We map the aspect-level annotations to sentence level by taking the most common label and remove instances of mixed polarity. We also create a binary setup by combining the strong and weak classes. This gives us a total of six experiments. The details of the sentence-level datasets are summarized in Table 1 .",
"For each of the experiments, we take 70 percent of the data for training, 20 percent for testing and the remaining 10 percent are used as development data for tuning meta-parameters.",
"We use the following corpora to set up the experiments in which we train on a source language corpus $C_{S}$ and test on a target language corpus $C_{T}$ . Statistics for all of the corpora are shown in Table 3 . We include a binary classification setup, where neutral has been removed and strong positive and strong negative have been mapped to positive and negative, as well as a multiclass setup, where the original labels are used.",
"OpeNER Corpora: The OpeNER corpora Agerri2013 are composed of hotel reviews, annotated for aspect-based sentiment. Each aspect is annotated with a sentiment label (Strong Positive, Positive, Negative, Strong Negative). We perform experiments with the English and Spanish versions.",
"MultiBooked Corpora: The MultiBooked corpora Barnes2018a are also hotel reviews annotated in the same way as the OpeNER corpora, but in Basque and Catalan. These corpora allow us to observe how well each approach performs on low-resource languages.",
"SemEval 2016 Task 5: We take the English and Spanish restaurant review corpora made available by the organizers of the SemEval event Pontiki2016. These corpora are annotated for three levels of sentiment (positive, neutral, negative).",
"USAGE Corpora: The USAGE corpora Klinger2014a are Amazon reviews taken from a number of different items, and are available in English and German. Each aspect is annotated for three levels of sentiment (positive, neutral, negative). As the corpus has two sets of annotations available, we take the annotations from annotator 1 as the gold standard.",
"For Blse, VecMap, Muse, and MT, we require monolingual vector spaces for each of our languages. For English, we use the publicly available GoogleNews vectors. For Spanish, Catalan, and Basque, we train skip-gram embeddings using the Word2Vec toolkit with 300 dimensions, subsampling of $10^{-4}$ , window of 5, negative sampling of 15 based on a 2016 Wikipedia corpus (sentence-split, tokenized with IXA pipes Agerri2014 and lowercased). The statistics of the Wikipedia corpora are given in Table 2 .",
"For Blse, VecMap, Muse, and Barista, we also require a bilingual lexicon. We use the sentiment lexicon from HuandLiu2004 (to which we refer in the following as Hu and Liu) and its translation into each target language. We translate the lexicon using Google Translate and exclude multi-word expressions. This leaves a dictionary of 5700 translations in Spanish, 5271 in Catalan, and 4577 in Basque. We set aside ten percent of the translation pairs as a development set in order to check that the distances between translation pairs not seen during training are also minimized during training."
],
[
"We compare Blse (Sections UID23 – UID30 ) to VecMap, Muse, and Barista (Section \"Previous Work\" ) as baselines, which have similar data requirements and to machine translation (MT) and monolingual (Mono) upper bounds which request more resources. For all models (Mono, MT, VecMap, Muse, Barista), we take the average of the word embeddings in the source-language training examples and train a linear SVM. We report this instead of using the same feed-forward network as in Blse as it is the stronger upper bound. We choose the parameter $c$ on the target language development set and evaluate on the target language test set.",
"Upper Bound Mono. We set an empirical upper bound by training and testing a linear SVM on the target language data. Specifically, we train the model on the averaged embeddings from target language training data, tuning the $c$ parameter on the development data. We test on the target language test data.",
"Upper Bound MT. To test the effectiveness of machine translation, we translate all of the sentiment corpora from the target language to English using the Google Translate API. Note that this approach is not considered a baseline, as we assume not to have access to high-quality machine translation for low-resource languages of interest.",
"Baseline Unsup We compare with the unsupervised statistical machine translation approach proposed by artetxe2018emnlp. This approach uses a self-supervised method to create bilingual phrase embeddings which then populates a phrase table. Monolingual n-gram language models and an unsupervised variant of MERT are used to create a MT model which is improved through iterative backtranslation. We use the Wikipedia corpora from Section UID42 to create the unsupervised SMT system between English and the target languages and run the training proceedure with default parameters. Finally, we translate all test examples in the target languages to English.",
"Baseline VecMap. We compare with the approach proposed by Artetxe2016 which has shown promise on other tasks, e. g., word similarity. In order to learn the projection matrix $W$ , we need translation pairs. We use the same word-to-word bilingual lexicon mentioned in Section UID23 . We then map the source vector space $S$ to the bilingual space $\\hat{S} = SW$ and use these embeddings.",
"Baseline Muse. This baseline is similar to VecMap but incorporates and adversarial objective as well as a localized scaling objective, which further improve the orthogonal refinement so that the two language spaces are even more similar.",
"Baseline Barista. The approach proposed by Gouws2015taskspecific is another appropriate baseline, as it fulfills the same data requirements as the projection methods. The bilingual lexicon used to create the pseudo-bilingual corpus is the same word-to-word bilingual lexicon mentioned in Section UID23 . We follow the authors' setup to create the pseudo-bilingual corpus. We create bilingual embeddings by training skip-gram embeddings using the Word2Vec toolkit on the pseudo-bilingual corpus using the same parameters from Section UID42 .",
"Our method: BLSE. Our model, Blse, is implemented in Pytorch Pytorch and the word embeddings are initialized with the pretrained word embeddings $S$ and $T$ mentioned in Section UID42 . We use the word-to-word bilingual lexicon from Section UID46 , tune the hyperparameters $\\alpha $ , training epochs, and batch size on the target development set and use the best hyperparameters achieved on the development set for testing. ADAM Kingma2014a is used in order to minimize the average loss of the training batches.",
"Ensembles. In order to evaluate to what extent each projection model adds complementary information to the machine translation approach, we create an ensemble of MT and each projection method (Blse, VecMap, Muse, Barista). A random forest classifier is trained on the predictions from MT and each of these approaches."
],
[
"For the targeted classification experiment, we compare the same models mentioned above, but adapted to the setting using the Split method from Section \"Targeted Model\" .",
"A simple majority baseline sets the lower bound, while the MT-based model serves as an upper bound. We assume our models to perform between these two, as we do not have access to the millions of parallel sentences required to perform high-quality MT and particularly aim at proposing a method which is less resource-hungry.",
"We hypothesize that cross-lingual approaches are particularly error-prone when evaluative phrases and words are wrongly predicted. In such settings, it might be beneficial for a model to put emphasis on the target word itself and learn a prior distribution of sentiment for each target independent of the context. For example, if you assume that all mentions of Steven Segal are negative in movie reviews, it is possible to achieve good results Bird2009. On the other hand, it may be that there are not enough examples of target-context pairs, and that it is better to ignore the target and concentrate only on the contexts.",
"To analyze this, we compare our model to two simplified versions. In addition, this approach enables us to gain insight in the source of relevant information. The first is Target-only, which means that we use the model in the same way as before but ignore the context completely. This serves as a tool to understand how much model performance originates from the target itself.",
"In the same spirit, we use a Context-only model, which ignores the target by constraining the parameters of all target phrase embeddings to be the same. This approach might be beneficial over our initial model if the prior distribution between targets was similar and the context actually carries the relevant information.",
"As the baseline for each projection method, we assume all targets in each sentence respectively to be of the same polarity (Sent). This is generally an erroneous assumption, but can give good results if all of the targets in a sentence have the same polarity. In addition, this baseline provides us with the information about whether the models are able to handle information from different positions in the text."
],
[
"In Table 4 , we report the results of all four methods. Our method outperforms the other projection methods (the baselines VecMap, Muse, and Barista) on four of the six experiments substantially. It performs only slightly worse than the more resource-costly upper bounds (MT and Mono). This is especially noticeable for the binary classification task, where Blse performs nearly as well as machine translation and significantly better than the other methods. Unsup also performs similarly to Blse on the binary tasks, while giving stronger performance on the 4-class setup. We perform approximate randomization tests Yeh2000 with 10,000 runs and highlight the results that are statistically significant (*p $<$ 0.01) in Table 4 .",
"In more detail, we see that MT generally performs better than the projection methods (79–69 $\\text{F}_1$ on binary, 52–44 on 4-class). Blse (75–69 on binary, 41–30 on 4-class) has the best performance of the projection methods and is comparable with MT on the binary setup, with no significant difference on binary Basque. VecMap (67–46 on binary, 35–21 on 4-class) and Barista (61–55 on binary, 40–34 on 4-class) are significantly worse than Blse on all experiments except Catalan and Basque 4-class. Muse (67–62 on binary, 45–34 on 4-class) performs better than VecMap and Barista. On the binary experiment, VecMap outperforms Barista on Spanish (67.1 vs. 61.2) and Catalan (60.7 vs. 60.1) but suffers more than the other methods on the four-class experiments, with a maximum $\\text{F}_1$ of 34.9. Barista is relatively stable across languages. Unsup performs well across experiments (76–65 on binary, 49–39 on 4-class), even performing better than MT on both Catalan tasks and Spanish 4-class.",
"The Ensemble of MT and Blse performs the best, which shows that Blse adds complementary information to MT. Finally, we note that all systems perform worse on Basque. This is presumably due to the increased morphological complexity of Basque, as well as its lack of similarity to the source language English (Section UID102 ).",
"We analyze three aspects of our model in further detail: 1) where most mistakes originate, 2) the effect of the bilingual lexicon, and 3) the effect and necessity of the target-language projection matrix $M^{\\prime }$ .",
"In order to analyze where each model struggles, we categorize the mistakes and annotate all of the test phrases with one of the following error classes: vocabulary (voc), adverbial modifiers (mod), negation (neg), external knowledge (know) or other. Table 5 shows the results.",
"Vocabulary: The most common way to express sentiment in hotel reviews is through the use of polar adjectives (as in “the room was great”) or the mention of certain nouns that are desirable (“it had a pool”). Although this phenomenon has the largest total number of mistakes (an average of 72 per model on binary and 172 on 4-class), it is mainly due to its prevalence. MT performed the best on the test examples which according to the annotation require a correct understanding of the vocabulary (81 $\\text{F}_1$ on binary /54 $\\text{F}_1$ on 4-class), with Blse (79/48) slightly worse. Muse (76/23), VecMap (70/35), and Barista (67/41) perform worse. This suggests that Blse is better than Muse, VecMap and Barista at transferring sentiment of the most important sentiment bearing words.",
"Negation: Negation is a well-studied phenomenon in sentiment analysis Pang2002,Wiegand2010,Zhu2014,Reitan2015 . Therefore, we are interested in how these four models perform on phrases that include the negation of a key element, for example “In general, this hotel isn't bad\". We would like our models to recognize that the combination of two negative elements “isn't\" and “bad\" lead to a Positive label.",
"Given the simple classification strategy, all models perform relatively well on phrases with negation (all reach nearly 60 $\\text{F}_1$ in the binary setting). However, while Blse performs the best on negation in the binary setting (82.9 $\\text{F}_1$ ), it has more problems with negation in the 4-class setting (36.9 $\\text{F}_1$ ).",
"Adverbial Modifiers: Phrases that are modified by an adverb, e. g., the food was incredibly good, are important for the four-class setup, as they often differentiate between the base and Strong labels. In the binary case, all models reach more than 55 $\\text{F}_1$ . In the 4-class setup, Blse only achieves 27.2 $\\text{F}_1$ compared to 46.6 or 31.3 of MT and Barista, respectively. Therefore, presumably, our model does currently not capture the semantics of the target adverbs well. This is likely due to the fact that it assigns too much sentiment to functional words (see Figure 6 ). Muse performs poorly on modified examples (20.3 $\\text{F}_1$ ).",
"External Knowledge Required: These errors are difficult for any of the models to get correct. Many of these include numbers which imply positive or negative sentiment (350 meters from the beach is Positive while 3 kilometers from the beach is Negative). Blse performs the best (63.5 $\\text{F}_1$ ) while MT performs comparably well (62.5). Barista performs the worst (43.6).",
"Binary vs. 4-class: All of the models suffer when moving from the binary to 4-class setting; an average of 26.8 in macro $\\text{F}_1$ for MT, 31.4 for VecMap, 22.2 for Barista, 34.1 for Muse, and 36.6 for Blse. The vector projection methods (VecMap, Muse, and Blse) suffer the most, suggesting that they are currently more apt for the binary setting.",
"We analyze how the number of translation pairs affects our model. We train on the 4-class Spanish setup using the best hyper-parameters from the previous experiment.",
"Research into projection techniques for bilingual word embeddings Mikolov2013translation,Lazaridou2015,Artetxe2016 often uses a lexicon of the most frequent 8–10 thousand words in English and their translations as training data. We test this approach by taking the 10,000 word-to-word translations from the Apertium English-to-Spanish dictionary. We also use the Google Translate API to translate the NRC hashtag sentiment lexicon Mohammad2013 and keep the 22,984 word-to-word translations. We perform the same experiment as above and vary the amount of training data from 0, 100, 300, 600, 1000, 3000, 6000, 10,000 up to 20,000 training pairs. Finally, we compile a small hand translated dictionary of 200 pairs, which we then expand using target language morphological information, finally giving us 657 translation pairs. The macro $\\text{F}_1$ score for the Hu and Liu dictionary climbs constantly with the increasing translation pairs. Both the Apertium and NRC dictionaries perform worse than the translated lexicon by Hu and Liu, while the expanded hand translated dictionary is competitive, as shown in Figure 3 .",
"While for some tasks, e. g., bilingual lexicon induction, using the most frequent words as translation pairs is an effective approach, for sentiment analysis, this does not seem to help. Using a translated sentiment lexicon, even if it is small, gives better results.",
"The main motivation for using two projection matrices $M$ and $M^{\\prime }$ is to allow the original embeddings to remain stable, while the projection matrices have the flexibility to align translations and separate these into distinct sentiment subspaces. To justify this design decision empirically, we perform an experiment to evaluate the actual need for the target language projection matrix $M^{\\prime }$ : We create a simplified version of our model without $M^{\\prime }$ , using $M$ to project from the source to target and then $P$ to classify sentiment.",
"The results of this model are shown in Figure 4 . The modified model does learn to predict in the source language, but not in the target language. This confirms that $M^{\\prime }$ is necessary to transfer sentiment in our model.",
"Additionally, we provide an analysis of a similar model to ours, but which uses $M = \\mathbb {R}^{d, o}$ and $M^{\\prime } = \\mathbb {R}^{d^{\\prime }, o}$ , where $d$ ( $d^{\\prime }$ ) is the dimensionality of the original embeddings and $o$ is the label size, to directly model crosslingual sentiment, such that the final objective function is ",
"$$J =\\hspace{-14.22636pt}\\sum _{(x,y) \\in C_{\\textrm {source}}}\\hspace{2.84526pt}\\sum _{(s,t) \\in L}\\hspace{0.0pt}\\alpha \\cdot H(x, y) + (1 - \\alpha ) \\cdot || M \\cdot s - M^{\\prime } \\cdot t ||$$ (Eq. 66) ",
"thereby simplifying the model and removing the $P$ parameter. Table 6 shows that Blse outperforms this simplified model on all tasks.",
"In order to understand how well our model transfers sentiment information to the target language, we perform two qualitative analyses. First, we collect two sets of 100 positive sentiment words and one set of 100 negative sentiment words. An effective cross-lingual sentiment classifier using embeddings should learn that two positive words should be closer in the shared bilingual space than a positive word and a negative word. We test if Blse is able to do this by training our model and after every epoch observing the mean cosine similarity between the sentiment synonyms and sentiment antonyms after projecting to the joint space.",
"We compare Blse with VecMap and Barista by replacing the Linear SVM classifiers with the same multi-layer classifier used in Blse and observing the distances in the hidden layer. Figure 5 shows this similarity in both source and target language, along with the mean cosine similarity between a held-out set of translation pairs and the macro $\\text{F}_1$ scores on the development set for both source and target languages for Blse, Barista, and VecMap. From this plot, it is clear that Blse is able to learn that sentiment synonyms should be close to one another in vector space and antonyms should have a negative cosine similarity. While the other models also learn this to some degree, jointly optimizing both sentiment and projection gives better results.",
"Secondly, we would like to know how well the projected vectors compare to the original space. Our hypothesis is that some relatedness and similarity information is lost during projection. Therefore, we visualize six categories of words in t-SNE, which projects high dimensional representations to lower dimensional spaces while preserving the relationships as best as possible Vandermaaten2008: positive sentiment words, negative sentiment words, functional words, verbs, animals, and transport.",
"The t-SNE plots in Figure 6 show that the positive and negative sentiment words are rather clearly separated after projection in Blse. This indicates that we are able to incorporate sentiment information into our target language without any labeled data in the target language. However, the downside of this is that functional words and transportation words are highly correlated with positive sentiment.",
"Finally, in order to analyze the sensitivity of the alpha parameter, we train Blse models for 30 epochs each with $\\alpha $ between 0 and 1. Figure 7 shows the average cosine similarity for the translation pairs, as well as macro $\\text{F}_1$ for both source and target language development data.",
"Values near 0 lead to poor translation and consecuently poor target language transfer. There is a rather large “sweet spot” where all measures perform best and finally, the translation is optimized to the detriment of sentiment prediction in both source and target languages with values near 1.",
"The experiments in this section have proven that it is possible to perform cross-lingual sentiment analysis without machine translation, and that jointly learning to project and predict sentiment is advantageous. This supports the growing trend of jointly training for multiple objectives Tang2014,Klinger2015,Ferreira2016.",
"This approach has also been exploited within the framework of multi-task learning, where a model learns to perform multiple similar tasks in order to improve on a final task Collobert2011a. The main difference between the joint method proposed here and multi-task learning is that vector space projection and sentiment classification are not similar enough tasks to help each other. In fact, these two objectives compete against one another, as a perfect projection would not contain enough information for sentiment classification, and vice versa."
],
[
"Table 7 shows the macro $\\text{F}_1$ scores for all cross-lingual approaches (Blse, VecMap, Muse, Barista, MT, Unsup) and all targeted approaches (Sent, Split, Context-only, and Target-only). The final column is the average over all corpora. The final row in each setup shows the macro $\\text{F}_1$ for a classifier that always chooses the majority class.",
"Blse outperforms other projection methods on the binary setup, 63.0 macro averaged $\\text{F}_1$ across corpora versus 59.0, 57.9, and 51.4 for VecMap, Muse, and Barista, respectively. On the multiclass setup, however, Muse (32.2 $\\text{F}_1$ ) is the best, followed by VecMap (31.0), Barista (28.1) and Blse (23.7). Unsup performs well across all experiments, achieving the best results on OpeNER ES (73.2 on binary and 42.7 on multiclass) and SemEval binary (77.1). VecMap is never the best nor the worst approach. In general, Barista performs poorly on the binary setup, but slightly better on the multiclass, although the overall performance is still weak. These results are similar to those observed in Experiment 1 for sentence classification.",
"The Split approach to ABSA improves over the Sent baseline on 33 of 50 experiments, especially on binary (21/25), while on multiclass it is less helpful (13/25). Both Sent and Split normally outperform the Context-only and Target-only approaches. This confirms the intuition that it is important to take both context and target information into account for classification. Additionally, the Context-only approach always performs better than Target-only, which indicates that context is more important than the prior probability of a target being positive or negative.",
"Unlike the projection methods, MT using only the Sent representation performs well on the OpeNER and MultiBooked datasets, while suffering more on the SemEval and USAGE datasets. This is explained by the percentage of sentences that contain contrasting polarities in each dataset: between 8 and 12% for the OpeNER and Multibooked datasets, compared to 29% for SemEval or 50% for USAGE. In sentences with multiple contrasting polarities, the Sent baseline performs poorly.",
"Finally, the general level of performance of projection-based targeted cross-lingual sentiment classification systems shows that they still lag 10+ percentage points behind MT on binary (compare MT (72.9 $\\text{F}_1$ ) with Blse (63.0)), and 6+ percentage points on multiclass (MT (38.8) versus Muse (32.2)). The gap between MT and projection-based approaches is therefore larger on targeted sentiment analysis than at sentence-level.",
"We perform a manual analysis of the targets misclassified by all systems on the OpeNER Spanish binary corpus (see Table 8 ) and find that the average length of misclassified targets is slightly higher than that of correctly classified targets, except with VecMap. This indicates that averaging may have a detrimental effect as the length of the targets increases.",
"With the MT upper bounds, there is a non-negligible amount of noise introduced by targets which have been incorrectly translated (0.05% OpeNER ES, 6% MultiBooked EU, 2% CA, 2.5% SemEval, 1% USAGE). We hypothesize that this is why MT with Context-only performs better than MT with Split. This motivates further research with projection-based methods, as they do not suffer from translation errors.",
"The confusion matrices of the models on the SemEval task, shown in Figure 8 , show that on the multilabel task, models are not able to learn the neutral class. This derives from the large class imbalance found in the data (see Table 3 ). Similarly, models do not learn the Strong Negative class on the OpeNER and MultiBooked datasets."
],
[
"The performance of machine learning models on different target languages depends on the amount of data available, the quality of the data, and characteristics of the target language, e. g., morphological complexity. In the following, we analyze these aspects. There has been previous work that has observed target-language specific differences in multilingual dependency parsing Zeljko2016, machine translation Johnson2017, and language modeling Cotterell2018,Gerz2018. We are not aware of any work in cross-lingual sentiment analysis that explores the relationship between target language and performance in such depth and aim at improving this situation in the following.",
"Additionally, the effect of domain differences when performing cross-lingual tasks has not been studied in depth. Hangya2018 propose domain adaptation methods for cross-lingual sentiment classification and bilingual dictionary induction. They show that creating domain-specific cross-lingual embeddings improves the classification for English-Spanish. However, the source-language training data used to train the sentiment classifier is taken from the same domain as the target-language test data. Therefore, it is not clear what the effect of using source-language training data from different domains would be. We analyzed the model presented in Section \"Sentence-level Model\" in a domain adaptation setup, including the impact of domain differences Barnes2018c. The main result was that our model performs particularly well on more distant domains, while other approaches Chen2012,Ziser2017 performed better when the source and target domains were not too dissimilar.",
"In the following, we transfer this analysis to the target-based projection model in a real-world case study which mimics a user searching for the sentiment on touristic attractions. In order to analyze how well these methods generalize to new languages and domains, we deploy the targeted Blse, Muse, VecMap and MT models on tweets in ten Western European languages with training data from three different domains. Additionally, we include experiments with the Unsup models for a subset of the languages. English is the source language in all experiments, and we test on each of the ten target languages and attempt to answer the following research questions:",
"How much does the amount of monolingual data available to create the original embeddings affect the final results?",
"How do features of the target language, i. e. similarity to source language or morphological complexity, affect the performance?",
"How do domain mismatches between source-language training and target-language test data affect the performance?",
"Section \"Discussion\" addresses our findings regarding these questions and demonstrates that 1) the amount of monolingual data does not correlate with classification results, 2) language similarity between the source and target languages based on word and character n-gram distributions predicts the performance of Blse on new datasets, and 3) domain mismatch has more of an effect on the multiclass setup than binary."
],
[
"We collect tweets directed at a number of tourist attractions in European cities using the Twitter API in English and ten target languages, several of which are under-resourced (Basque, Catalan, Galician, French, Italian, Dutch, German, Danish, Swedish, and Norwegian). We detail the data collection and annotation procedures in Section UID85 . For classification, we compare MT and the best-performing projection-based methods (Blse, Muse, VecMap) using the Split method, detailed further in Section UID94 . As we need monolingual embeddings for all projection-based approaches, we create skipgram embeddings from Wikipedia dumps, detailed in Section UID91 .",
"As an experimental setting to measure the effectiveness of targeted cross-lingual sentiment models on a large number of languages, we collect and annotate small datasets from Twitter for each of the target languages, as well as a larger dataset to train the models in English. While it would be possible to only concentrate our efforts on languages with existing datasets in order to enable evaluation, this could give a distorted view of how well these models generalize. In order to reduce the possible ambiguity of the tourist attractions, we do not include those that have two or more obvious senses, e. g., Barcelona could refer either to the city or the football team.",
"In order to obtain a varied sample of tweets with subjective opinions, we download tweets that contain mentions of these tourist attractions as well as one of several emoticons or keywords. This distant supervision technique has been used to create sentiment lexicons Mohammad2016, semi-supervised training data Felbo2017, and features for a classifier Turney2003. We then remove any tweets that are less than 7 words long or which contain more than 3 hashtags or mentions. This increases the probability that a tweet text contains sufficient information for our use case setting.",
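The filtering heuristic described above (at least 7 words, no more than 3 hashtags or mentions) can be sketched as follows; this is a minimal illustration, and the function name and thresholds are ours, not taken from the paper's released code:

```python
def keep_tweet(text, min_words=7, max_tags=3):
    """Discard tweets shorter than `min_words` tokens or containing more
    than `max_tags` hashtags or mentions combined."""
    tokens = text.split()
    n_tags = sum(1 for t in tokens if t.startswith(("#", "@")))
    return len(tokens) >= min_words and n_tags <= max_tags

print(keep_tweet("Loved the view from the Eiffel Tower at sunset"))  # True
print(keep_tweet("#paris #eiffel #tower @friend wow"))               # False
```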
"We manually annotate all tweets for their polarity toward the target to ensure the quality of the data. Note that we only annotate the sentiment toward the predefined list of targets, which leads to a single annotated target per tweet. Any tweet whose polarity toward the target is unclear is assigned a neutral label. This produces the three-class setup that is commonly used in the SemEval tasks Nakov2013,Nakov2016. Annotators were master's and doctoral students between 27 and 35 years old. All had either native or C1-level fluency in the languages of interest. Finally, for a subset of tweets in English, Catalan, and Basque, two annotators classify each tweet. Table 11 shows three example tweets from English.",
"Table 10 depicts the number of annotated targets for all languages, as well as inter-annotator agreement using Cohen's $\\kappa $ . The neutral class is the largest in all languages, followed by positive, and negative. These distributions are similar to those found in other Twitter crawled datasets Nakov2013,Nakov2016. We calculate pairwise agreement on a subset of languages using Cohen's $\\kappa $ . The scores reflect a good level of agreement (0.62, 0.60, and 0.61 for English, Basque, and Catalan, respectively).",
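The pairwise agreement scores reported above can be reproduced with a standard Cohen's kappa computation; a minimal self-contained sketch (the example label sequences are invented for illustration):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Pairwise Cohen's kappa for two annotators' label sequences."""
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] * cb[l] for l in set(a) | set(b)) / n ** 2  # chance
    return (p_o - p_e) / (1 - p_e)

ann1 = ["pos", "neu", "neg", "neu", "pos", "neu"]
ann2 = ["pos", "neu", "neu", "neu", "pos", "neg"]
print(round(cohens_kappa(ann1, ann2), 2))
```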
"We collect Wikipedia dumps for ten languages; namely, Basque, Catalan, Galician, French, Italian, Dutch, German, Danish, Swedish, and Norwegian. We then preprocess them using the Wikiextractor script, and sentence and word tokenize them with either IXA pipes Agerri2014 (Basque, Galician, Italian, Dutch, and French), Freeling Padro2010 (Catalan), or NLTK Loper2002 (Norwegian, Swedish, Danish).",
"For each language we create Skip-gram embeddings with the word2vec toolkit following the pipeline and parameters described in Section UID42 . This process gives us 300 dimensional vectors trained on similar data for all languages. We assume that any large differences in the embedding spaces derive from the size of the data and the characteristics of the language itself. Following the same criteria laid out in Section UID46 , we create projection dictionaries by translating the Hu and Liu dictionary HuandLiu2004 to each of the target languages and keeping only translations that are single word to single word. The statistics of all Wikipedia corpora, embeddings, and projection dictionaries are shown in Table 12 .",
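The dictionary filtering criterion (keep only single-word-to-single-word translations) amounts to a one-line filter; a sketch, with illustrative Basque pairs that are ours, not from the actual dictionaries:

```python
def filter_projection_dict(pairs):
    """Keep only translation pairs where both the source and target side
    consist of a single token."""
    return [(s, t) for s, t in pairs
            if len(s.split()) == 1 and len(t.split()) == 1]

pairs = [("good", "ona"), ("well made", "ondo egina"), ("happy", "pozik")]
print(filter_projection_dict(pairs))  # [('good', 'ona'), ('happy', 'pozik')]
```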
"Since we predetermine the sentiment target for each tweet, we can perform targeted experiments without further annotation. Our model is the targeted Blse model with the Split method described in Section \"Targeted Model\" . Additionally, we compare to the targeted Muse, VecMap, and MT models, as well as an Ensemble classifier that combines the predictions from Blse and MT and takes the largest predicted class (see Section \"Setting for Experiment 1: Sentence-level Classification\" for details). Finally, we set a majority baseline by assigning the most common label (neutral) to all predictions. All models are trained for 300 epochs with a learning rate of 0.001 and $\\alpha $ of 0.3.",
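One simple reading of the ensemble ("taking the largest predicted class" from the Blse and MT outputs) is to sum the two classifiers' class-probability vectors and take the argmax; this is a hedged sketch of that combination rule, not necessarily the exact ensemble used in the paper:

```python
import numpy as np

def ensemble_predict(probs_a, probs_b):
    """Combine class probabilities of two classifiers (e.g. Blse and MT)
    and pick the largest combined class per example."""
    return np.argmax(np.asarray(probs_a) + np.asarray(probs_b), axis=1)

p_blse = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]
p_mt   = [[0.5, 0.2, 0.3], [0.1, 0.3, 0.6]]
print(ensemble_predict(p_blse, p_mt))  # [0 2]
```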
"We train the five models on the English data compiled during this study, as well as on the USAGE, and SemEval English data (the details can be found in Table 3 ) and test the models on the target-language test set."
],
[
"Table 13 shows the macro $\\text{F}_1$ scores for all cross-lingual targeted sentiment approaches (Blse, Muse, VecMap, MT) trained on English data and tested on the target-language using the Split method proposed in \"Targeted Model\" . The final column is the average over all languages. Given the results from the earlier experiments, we hypothesize that MT should outperform Muse, VecMap and Blse for most of the languages.",
"On the binary setup, Blse outperforms all other cross-lingual methods including MT and Unsup, with 56.0 macro averaged $\\text{F}_1$ across languages versus 48.7, 49.4, and 48.9 for Muse, VecMap, and MT respectively (54.1 across Basque and Catalan versus 46.0 for Unsup). Blse performs particularly well on Catalan (54.5), Italian (63.4), Swedish (65.3), and Danish (68.3). VecMap performs poorly on Galician (33.3), Italian (38.2), and Danish (43.4), but outperforms all other methods on Basque (56.4), Dutch (55.2) and Norwegian (59.0). MT performs worse than Blse and VecMap, although it does perform best for Galician (56.5). Unlike experiments in Section \"Sentence-level Model\" , the ensemble approach does not perform better than the individual classifiers and Muse leads to the classifier with the lowest performance overall. Unsup performs better than MT on both Basque and Catalan.",
"On the multiclass setup, however, MT (36.6 $\\text{F}_1$ ) is the best, followed by VecMap (34.1), Blse (32.6), and Muse (26.1). Compared to the experiments on hotel reviews, the average differences between models is small (2.5 percentage points between MT and VecMap, and 1.5 between VecMap and Blse). Unsup performs better than MT on Basque (40.1), but worse on Catalan (28.5). Again, all methods outperform the majority baseline.",
"On both the binary and multiclass setups, the best overall results are obtained by testing and training on data from the same domain (56.0 $\\text{F}_1$ for Blse and 36.6 $\\text{F}_1$ for MT). Training MT, Muse, and VecMap on the SemEval data performs better than training on USAGE, however.",
"An initial error analysis shows that all models suffer greatly on the negative class. This seems to suggest that negative polarity towards a target is more difficult to determine within these frameworks. A significant amount of the tweets that have negative polarity towards a target also express positive or neutral sentiment towards other targets. The averaging approach to create the context vectors does not currently allow any of the models to exclude this information, leading to poor performance on these instances.",
"Finally, compared to the experiments performed on hotel and product reviews in Section \"Experiments\" , the noisy data from Twitter is more difficult to classify. Despite the rather strong majority baseline (an average of 40.5 Macro $\\text{F}_1$ on binary), no model achieves more than an average of 56 Macro $\\text{F}_1$ on the binary task. A marked difference is that Blse and VecMap outperform MT on the binary setup. Unlike the previous experiment, Muse performs the worst on the multiclass setup. The other projection methods obtain multiclass results similar to the previous experiment (32.6–34.1 $\\text{F}_1$ here compared to 23.7–31.0 $\\text{F}_1$ previously)."
],
[
"In this section, we present an error analysis. Specifically, Table 14 shows examples where Blse correctly predicts the polarity of a tweet that MT and Unsup incorrectly predict, and vice versa, as well as examples where all models are incorrect.",
"In general, in examples where Blse outperforms MT and Unsup, the translation-based approaches often mistranslate important sentiment words, which leads to prediction errors. In the first Basque tweet, for example, “#txindoki igo gabe ere inguruaz goza daiteke... zuek joan tontorrera eta utzi arraroei gure kasa...”, Unsup incorrectly translates the most important sentiment word in the tweet “goza” (enjoy) to “overlook” and subsequently incorrectly predicts that the polarity towards txindoki is negative.",
"Tweets that contain many out-of-vocabulary words or non-standard spelling (due to dialectal differences, informal writing, etc.), such as the third tweet in Table 14 , “kanpora jun barik ehko asko: anboto, txindoki”, are challenging for all models. In this example “jun” is a non-standard spelling of “joan” (go), “barik” is a Bizcayan Basque variant of “gabe” (without) , and “ehko” is an abbreviation of “Euskal Herriko” (Basque Country's). These lead to poor translations for MT and Unsup, but pose a similar out-of-vocabulary problem for Blse.",
"In order to give a more qualitative view of the targeted model, Figure 9 shows t-SNE projections of the bilingual vector space before and after training on the Basque binary task, following the same procedure mentioned in Section UID68 . As in the sentence-level experiment, there is a separation of the positive and negative sentiment words, although it is less clear for targeted sentiment. This is not surprising, as a targeted model must learn not only the prior polarity of words, but also how they interact with targets, leading to a more context-dependent representation of sentiment words.",
"Finally, we further analyze the effects of three variables that are present in cross-lingual sentiment analysis: a) availability of monolingual unlabeled data, b) similarity of source and target languages, and c) domain shift between the source language training data and the target language test data.",
"We pose the question of what the relationship is between the amount of monolingual data available to create the embedding spaces and the classification results of the models. If the original word embedding spaces are not of high quality, this could make it difficult for the projection-based models to create useful features. In order to test this, we perform ablation experiments by training target-language embeddings on varying amounts of data ( $1 \\times 10^{4}$ to $5 \\times 10^{9}$ tokens) and testing the models with the full target-language embeddings replaced by these. We plot the performance of the models as a function of available monolingual data in Figure 10 .",
"Figure 10 shows that nearly all models, with the exception of Norwegian, perform poorly with very limited monolingual training data ( $1\\times 10^{4}$ tokens) and improve, although erratically, with more training data. Interestingly, the models require little data to achieve results comparable to using all available tokens to train the embeddings. A statistical analysis of the amount of unlabeled data available and the performance of Blse, Muse, and VecMap (Pearson's $r$ = $-0.14$ , $-0.27$ , $0.08$ , respectively) reveals no statistically significant correlation between them. This indicates that none of the models is particularly sensitive to the amount of monolingual training data available in the target language.",
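The Pearson's r values reported here and in the following analyses are the standard sample correlation coefficient; a minimal sketch over toy data (the input values are illustrative, not the paper's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two samples, e.g. data size vs. macro F1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 2))  # 1.0
```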
"One hypothesis for the differing results across languages is that the similarity between the source and target languages has an effect on the final classification performance. In order to analyze this, we need a measure of pairwise language similarity. Given that the features we use for classification are derived from distributional representations, we model similarity as a function of 1) universal POS-tag n-grams, which represent the contexts used during training, and 2) character n-grams, which represent differences in morphology. POS-tag n-grams have previously been used to classify genre Fang2010 and improve statistical machine translation Lioma2005, and the combination of POS-tag and character n-grams has proven useful for identifying the native language of second-language writers in English Kulmizev2017. This indicates that these are useful features for characterizing a language. In this section we calculate the pairwise similarity between all languages and then check whether it correlates with performance.",
"After POS-tagging the test sentences obtained from Twitter using the universal part of speech tags Petrov2012, we calculate the normalized frequency distribution $P_{l}$ for the POS-tag trigrams and $C_{l}$ for character trigrams for each language $l$ in $L =\n\\lbrace \\textrm {Danish, Swedish, Norwegian, Italian, Basque, Catalan,\nFrench, Dutch, Galician,}$ ",
" $\\textrm {German, English}\\rbrace $ . We then compute the pairwise cosine similarity $\\cos (A, B) = \\frac{A \\cdot B}{||A|| \\: ||B||}$ , where $A$ is the concatenation of $P_{l_{i}}$ and $C_{l_{i}}$ for language $l_{i}$ and $B$ is the concatenation of $P_{l_{j}}$ and $C_{l_{j}}$ for language $l_{j}$ .",
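The similarity metric above can be sketched end to end: build normalized trigram distributions for POS tags and characters, concatenate them (here via disjoint key prefixes), and take the cosine. Function names and the toy sentences are ours:

```python
from collections import Counter
import math

def trigram_dist(seq):
    """Normalized frequency distribution over trigrams of a sequence."""
    grams = Counter(tuple(seq[i:i + 3]) for i in range(len(seq) - 2))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def cosine(a, b):
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def language_similarity(pos_i, chars_i, pos_j, chars_j):
    """Cosine over the concatenation of POS-trigram and char-trigram dists."""
    a = {("P",) + k: v for k, v in trigram_dist(pos_i).items()}
    a.update({("C",) + k: v for k, v in trigram_dist(chars_i).items()})
    b = {("P",) + k: v for k, v in trigram_dist(pos_j).items()}
    b.update({("C",) + k: v for k, v in trigram_dist(chars_j).items()})
    return cosine(a, b)

pos = "DET NOUN VERB ADV".split()
print(language_similarity(pos, "the dog runs fast", pos, "el gos corre molt"))
```

Identical inputs yield a similarity of 1.0; languages sharing POS contexts but not character patterns fall somewhere below that, which is the behavior the grouping in Figure 11 relies on.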
"The pairwise similarities in Figure 11 conform to expectations, and language families are clearly grouped (Romance, Germanic, Scandinavian, with Basque as an outlier that has no more than 0.47 similarity with any language). This supports the use of our similarity metric for our purposes. We plot model performance as a function of language similarity in Figure 12 . To measure the correlation between language similarity and performance, we calculate Pearson's $r$ and find that for Blse there is a strong correlation between language similarity and performance, $r = 0.76$ with significance $p <\n0.01$ . Muse, VecMap and MT do not show these correlations ( $r$ = 0.41, 0.24, 0.14, respectively). For MT this may be due to robust machine translation being available even for languages that are dissimilar according to our metric, e. g., German-English. For Muse and VecMap, however, it is less clear why they do not follow the same trend as Blse.",
"In this section, we determine the effect of source-language domain on the cross-lingual sentiment classification task. Specifically, we use English language training data from three different domains (Twitter, restaurant reviews, and product reviews) to train the cross-lingual classifiers, and then test on the target-language Twitter data. In monolingual sentiment analysis, one would expect to see a drop when moving to more distant domains.",
"In order to analyze the effect of domain similarity further, we test the similarity of the domains of the source-language training data using Jensen-Shannon Divergence, which is a smoothed, symmetric version of the Kullback-Leibler Divergence, $D_{KL}(A||B) = \\sum _{i}^{N} a_{i} \\log \\frac{a_{i}}{b_{i}}$ . Kullback-Leibler Divergence measures the difference between the probability distributions $A$ and $B$ , but is undefined for any event $a_{i} \\in A$ with zero probability, which is common in term distributions. Jensen-Shannon Divergence is then $\nD_{JS}(A,B) = \\frac{1}{2} \\Big [ D_{KL}(A||B) + D_{KL}(B||A) \\Big ]\\,.\n$ ",
"Our similarity features are probability distributions over terms $t\n\\in \\mathbb {R}^{|V|}$ , where $t_{i}$ is the probability of the $i$ -th word in the vocabulary $V$ . For each domain, we create frequency distributions of the most frequent 10,000 unigrams that all domains have in common and measure the divergence with $D_{JS}$ .",
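A minimal sketch of the divergence computation over term distributions, using the paper's symmetrized formulation $D_{JS}(A,B) = \frac{1}{2}[D_{KL}(A||B) + D_{KL}(B||A)]$; the add-epsilon smoothing is one simple way to handle the zero-probability terms mentioned above, not necessarily the paper's exact choice:

```python
import math

def kl(a, b):
    """Kullback-Leibler divergence D_KL(a || b) for dense distributions."""
    return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)

def smooth(dist, eps=1e-9):
    """Add-epsilon smoothing so every event has non-zero probability."""
    total = sum(dist) + eps * len(dist)
    return [(p + eps) / total for p in dist]

def jensen_shannon(a, b, eps=1e-9):
    """Symmetrized divergence: 0.5 * (D_KL(a||b) + D_KL(b||a))."""
    a, b = smooth(a, eps), smooth(b, eps)
    return 0.5 * (kl(a, b) + kl(b, a))

p = [0.5, 0.3, 0.2, 0.0]
q = [0.4, 0.3, 0.2, 0.1]
print(round(jensen_shannon(p, q), 4))
```

The measure is symmetric by construction and zero only when the two term distributions coincide, which is what makes it usable as a domain-distance score in Table 15.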
"The results shown in Table 15 indicate that both the SemEval and USAGE datasets are relatively distinct from the Twitter data described in Section UID85 , while they are more similar to each other. Additionally, we plot the results of all models with respect to the training domain in Figure 13 .",
"We calculate Pearson's $r$ on the correlation between domain and model performance, shown in Table 16 . On the binary setup, the results show a negligible correlation for Blse (0.32), with no significant correlation for Muse, VecMap or MT. This suggests that the models are relatively robust to domain noise, or rather that there is so much other noise found in the approaches that domain is less relevant. On the multiclass setup, however, there is a significant effect for all models. This indicates that the multiclass models presented here are less robust than the binary models.",
"Both the SemEval and USAGE corpora differ equally from the Twitter data given the metric defined here. The fact that models trained on SemEval tend to perform better than those trained on USAGE therefore seems to be due to differences in label distribution rather than differences in domain. These label distributions are radically different in the multiclass setup, as the English Twitter data has a 30/50/20 distribution over Positive, Neutral, and Negative labels (67/1/32 and 68/4/28 for USAGE and SemEval, respectively). Both undersampling and oversampling help, but the performance is still worse than training on in-domain data.",
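Oversampling, as mentioned above, can be sketched as naively duplicating minority-class examples until every label matches the majority count; this is one simple variant, not necessarily the paper's exact resampling scheme:

```python
import random

def oversample(examples, labels, seed=0):
    """Duplicate minority-class examples until each label matches the
    majority count, then shuffle."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(examples, labels):
        by_label.setdefault(y, []).append(x)
    target = max(len(v) for v in by_label.values())
    out = []
    for y, xs in by_label.items():
        xs = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out.extend((x, y) for x in xs)
    rng.shuffle(out)
    return out

rows = oversample(["a", "b", "c", "d"], ["pos", "pos", "pos", "neg"])
print(rows)
```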
"The case study which we presented in this section showed results of deploying the models from Section \"Projecting Sentiment Across Languages\" to real-world Twitter data, which we collect and annotate for targeted sentiment analysis. The analysis of different phenomena revealed that for binary targeted sentiment analysis, Blse performs better than machine translation on noisy data from social media, although it is sensitive to differences between source and target languages. Finally, there is little correlation between performance on the cross-lingual sentiment task and the amount of unlabeled monolingual data used to create the original embedding spaces, which goes against our expectations.",
"Unlike the experiments in Section \"Sentence-level Model\" , the ensemble classifier employed here was not able to improve the results. We assume that the small size of the datasets in this experiment does not enable the classifier to learn which features are useful in certain contexts.",
"One common problem that appears when performing targeted sentiment analysis on noisy data from Twitter is that many of the targets of interest are ambiguous, which leads to false positives. Even with relatively unambiguous targets like “Big Ben”, there are a number of entities that can be referenced: Ben Roethlisberger (an American football player), an English language school in Barcelona, and many others. In order to deploy a full sentiment analysis system on Twitter data, it will be necessary to disambiguate these mentions before classifying the tweets, either as a preprocessing step or jointly.",
"In sentiment analysis, it is not yet common to test a model on multiple languages, despite the fact that current state-of-the-art models are often theoretically language-agnostic. This section shows that good performance in one language does not guarantee that a model transfers well to other languages, even given similar resources. We hope that future work in sentiment analysis will make better use of the available test datasets."
],
[
"With this article, we have presented a novel projection-based approach to targeted cross-lingual sentiment analysis. The central unit of the proposed method is Blse which enables the transfer of annotations from a source language to a non-annotated target language. The only input it relies on are word embeddings (which can be trained without manual labeling by self-annotation) and a comparably small translation dictionary which connects the semantics of the source and the target language.",
"In the binary classification setting (automatic labeling of sentences or documents), Blse constitutes a novel state of the art on several language and domain pairs. For a more fine-grained classification into four sentiment labels, Barista and Muse perform slightly better. The predictions in all settings are complementary to the strong upper bound of employing machine translation: in an ensemble, even this resource-intensive approach is outperformed.",
"The transfer from classification to target-level analysis revealed additional challenges. The performance is lower, particularly for the 4-class setting. Our analyses show that mapping of sentence predictions to the aspects mentioned in each sentence with a machine translation model is a very challenging empirical upper bound – the difference in performance compared to projection-based methods is greater here than for the sentence-classification setting. However, we showed that in resource-scarce environments, Blse constitutes the current state of the art for binary target-level sentiment analysis when incorporated in a deep learning architecture which is informed about the aspect. Muse performs better in the same architecture for the 4-class setting.",
"Our analysis further showed that the neural network needs to be informed about both the aspect and the context – limiting the information to a selection of these sentence parts strongly underperforms the combined setting. That also demonstrates that the model does not rely on prior distributions of aspect mentions.",
"The final experiment in the paper is a real-world deployment of the target-level sentiment analysis system in multilingual setting with 10 languages, where the assumption is that the only supervision is available in English (which is not part of the target languages). We learned here that it is important to have access to in-domain data (even for cross-lingual projection), especially in the multiclass setting. Binary classification however, which might often be sufficient for real-world applications, is more robust to domain changes. Further, machine translation is less sensitive to language dissimilarities, unlike projection-based methods. The amount of available unlabeled data to create embeddings plays a role in the final performance of the system, although only to a minor extent.",
"The current performance of the projection-based techniques still lags behind state-of-the-art MT approaches on most tasks, indicating that there is still much work to be done. While general bilingual embedding techniques do not seem to incorporate enough sentiment information, they are able to retain the semantics of their word vectors to a large degree even after projection. We hypothesize that the ability to retain the original semantics of the monolingual spaces leads to Muse performing better than MT on multiclass targeted sentiment analysis. The joint approach introduced in this work suffers from the degradation of the original semantics space, while optimizing the sentiment information. Moving from a similarity-based loss to a ranking loss, where the model must predict a ranked list of most similar translations could improve the model, but would require further resource development cross-lingually, as a simple bilingual dictionary would not provide enough information.",
"One problem that arises when using bilingual embeddings instead of machine translation is that differences in word order are no longer handled BIBREF2 . Machine translation models, on the other hand, always include a reordering element. Nonetheless, there is often a mismatch between the real source language word order and the translated word order. In this work, we avoided the problem by using a bag-of-embeddings representation, but Barnes2017 found that the bag-of-embeddings approach does not perform as well as approaches that take word order into account, e. g., Lstms or Cnns. We leave the incorporation of these classifiers into our framework for future work.",
"Unsupervised machine translation Artetxe2018,Lample2018,artetxe2018emnlp shows great promise for sentence-level classification. Like MT, however, it performs worse on noisy data, such as tweets. Therefore, users who want to apply targeted cross-lingual approaches to noisy data should currently consider using embedding projection methods, such as Blse. Future work on adapting unsupervised machine translation to noisy text may provide another solution for low-resource NLP.",
"The authors thank Patrik Lambert, Toni Badia, Amaia Oliden, Itziar Etxeberria, Jessie Kief, Iris Hübscher, and Arne Øhm for helping with the annotation of the resources used in this research. This work has been partially supported by the DFG Collaborative Research Centre SFB 732 and a SGR-DTCL Predoctoral Scholarship."
]
]
}
|
{
"question": [
"what baseline do they compare to?"
],
"question_id": [
"9c4ed8ca59ba6d240f031393b01f634a9dc3615d"
],
"nlp_background": [
"two"
],
"topic_background": [
"unfamiliar"
],
"paper_read": [
"somewhat"
],
"search_query": [
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"VecMap",
"Muse",
"Barista"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We compare Blse (Sections UID23 – UID30 ) to VecMap, Muse, and Barista (Section \"Previous Work\" ) as baselines, which have similar data requirements and to machine translation (MT) and monolingual (Mono) upper bounds which request more resources. For all models (Mono, MT, VecMap, Muse, Barista), we take the average of the word embeddings in the source-language training examples and train a linear SVM. We report this instead of using the same feed-forward network as in Blse as it is the stronger upper bound. We choose the parameter $c$ on the target language development set and evaluate on the target language test set."
],
"highlighted_evidence": [
"We compare Blse (Sections UID23 – UID30 ) to VecMap, Muse, and Barista (Section \"Previous Work\" ) as baselines, which have similar data requirements and to machine translation (MT) and monolingual (Mono) upper bounds which request more resources."
]
}
],
"annotation_id": [
"0f3e76f2f87e107765340c4bffef80796aee7322"
],
"worker_id": [
"2b413669fd1e681656c8d43a27df86e649065edf"
]
}
]
}
|
{
"caption": [
"Figure 1: Bilingual Sentiment Embedding Model (Blse)",
"Figure 2: The Split adaptation of our Blse model to targeted sentiment analysis. At test time, we replace the matrix M with the matrix M ′ .",
"Table 1: Statistics for the OpeNER English (EN) and Spanish (ES) as well as the MultiBooked Catalan (CA) and Basque (EU) datasets.",
"Table 2: Statistics for the Wikipedia corpora and monolingual vector spaces.",
"Table 3: Number of aspect-polarity tuples for the targeted datasets.",
"Table 4: Macro F1 of four models trained on English and tested on Spanish (ES), Catalan (CA), and Basque (EU). The bold numbers show the best results for each metric per column and the highlighted numbers show where Blse is better than the other projection methods, VecMap, Muse, and Barista (* p < 0.01).",
"Table 5: Error analysis for different phenomena for the binary (bi) and multi-class (4) setups. See text for explanation of error classes.",
"Figure 3: Macro F1 for translation pairs in the Spanish 4-class setup. Training with the expanded hand translated lexicon and machine-translated Hu and Liu lexicon gives a macro F1 that grows constantly with the number of translation pairs. Despite having several times more training data, the Apertium and NRC translation dictionaries do not perform as well.",
"Figure 4: Blse model (solid lines) compared to a variant without target language projection matrix M ′ (dashed lines). “Translation” lines show the average cosine similarity between translation pairs. The remaining lines show F1 scores for the source and target language with both variants of Blse. The modified model cannot learn to predict sentiment in the target language (red lines). This illustrates the need for the second projection matrix M ′.",
"Table 6: An empirical comparison of Blse and a simplified model which directly projects the embeddings to the sentiment classes. Blse outperforms the simplified model on all tasks.",
"Figure 5: Average cosine similarity between a subsample of translation pairs of same polarity (“sentiment synonyms”) and of opposing polarity (“sentiment antonyms”) in both target and source languages in each model. The x-axis shows training epochs. We see that Blse is able to learn that sentiment synonyms should be close to one another in vector space and sentiment antonyms should not.",
"Figure 6: t-SNE-based visualization of the Spanish vector space before and after projection with Blse. There is a clear separation of positive and negative words after projection, despite the fact that we have used no labeled data in Spanish.",
"Figure 7: An analysis of the α parameter of Blse showing cosine similarity of translation pairs and macro F1 for source and target development data. The optimal values range from 1× 10−6 to 1× 10−3.",
"Table 7: Macro F1 results for all corpora and techniques. We denote the best performing",
"Table 8: Average length of tokens of correctly and incorrectly classified targets on the OpeNER Spanish binary corpus.",
"Figure 8: Confusion matrices for all Split models on the SemEval task.",
"Table 9: Touristic targets used as tweet search criteria.",
"Table 10: Statistics of Tweet corpora collected for the deployment study, as well as interannotator agreement for English, Basque, and Catalan calculated with Cohen’s κ.",
"Table 11: Three example tweets in English. The underlined phrases are the targets.",
"Table 12: Statistics of Wikipedia corpora, embeddings, and projection dictionaries (M denotes million, k denotes thousand).",
"Table 13: Macro F1 of targeted cross-lingual models on Twitter data in 10 target languages. Twitter refers to models that have been trained on the English data mentioned in Table 10, while USAGE and SemEval are trained on the English data from the datasets mentioned in Section 4.1.2.",
"Figure 9: t-SNE-based visualization of the Basque vector space before and after projection with the targeted Blse. The positive and negative sentiment words are separated, although it is less clearly defined at target-level.",
"Table 14: Examples where Blse is better and worse than MT and Unsup. We show the original tweet in Blse, the automatic translation in MT and Unsup, and reference translations (Ref.). The label column shows the prediction of each model and the reference",
"Figure 10: Performance of Blse (Macro F1) on the binary sentiment task with training and test on Twitter as a function of amount of monolingual data available to train the monolingual embeddings in each language.",
"Figure 11: Cosine similarity of 3-gram POS-tag and 3-gram character frequency.",
"Figure 12: Performance (Macro F1) on the binary task as a function of cosine similarity between POS-tag and character trigram distributions in the source language (EN) and the target languages.",
"Figure 13: Performance of all models (Macro F1) on the binary and multiclass task when trained on different source language data. For each target language, we show a boxplot for all models trained on In-domain Twitter data (light green), USAGE product reviews (light blue), and SemEval restaurant reviews (pink). In the multiclass setup, we can see the in-domain data gives better results than the out-of-domain training data. This trend is not found in the binary setup, suggesting that binary classification is more robust to domain changes than multiclass classification.",
"Table 15: Domain similarity of English training data measured as Jennson-Shannon divergence between the most common 10,000 unigrams.",
"Table 16: Pearson’s r and p values for correlations between domain and performance of each model. On the binary setup, there is no statistically significant effect of domain, while on the multiclass setup, all results are statistically significant (p > 0.01, with Pearson’s r )."
],
"file": [
"8-Figure1-1.png",
"10-Figure2-1.png",
"11-Table1-1.png",
"12-Table2-1.png",
"13-Table3-1.png",
"16-Table4-1.png",
"17-Table5-1.png",
"18-Figure3-1.png",
"20-Figure4-1.png",
"20-Table6-1.png",
"21-Figure5-1.png",
"22-Figure6-1.png",
"23-Figure7-1.png",
"24-Table7-1.png",
"25-Table8-1.png",
"26-Figure8-1.png",
"28-Table9-1.png",
"29-Table10-1.png",
"29-Table11-1.png",
"30-Table12-1.png",
"31-Table13-1.png",
"33-Figure9-1.png",
"34-Table14-1.png",
"35-Figure10-1.png",
"36-Figure11-1.png",
"37-Figure12-1.png",
"38-Figure13-1.png",
"38-Table15-1.png",
"39-Table16-1.png"
]
}
|
1905.13413
|
Improving Open Information Extraction via Iterative Rank-Aware Learning
|
Open information extraction (IE) is the task of extracting open-domain assertions from natural language sentences. A key step in open IE is confidence modeling, ranking the extractions based on their estimated quality to adjust precision and recall of extracted assertions. We found that the extraction likelihood, a confidence measure used by current supervised open IE systems, is not well calibrated when comparing the quality of assertions extracted from different sentences. We propose an additional binary classification loss to calibrate the likelihood to make it more globally comparable, and an iterative learning process, where extractions generated by the open IE model are incrementally included as training samples to help the model learn from trial and error. Experiments on OIE2016 demonstrate the effectiveness of our method. Code and data are available at https://github.com/jzbjyb/oie_rank.
|
{
"section_name": [
"Introduction",
"Neural Models for Open IE",
"Problem Formulation",
"Model Architecture and Decoding",
"Iterative Rank-Aware Learning",
"Binary Classification Loss",
"Iterative Learning",
"Experimental Settings",
"Evaluation Results",
"Conclusion",
"Acknowledgements"
],
"paragraphs": [
[
"Open information extraction (IE, sekine2006demand, Banko:2007:OIE) aims to extract open-domain assertions represented in the form of $n$ -tuples (e.g., was born in; Barack Obama; Hawaii) from natural language sentences (e.g., Barack Obama was born in Hawaii). Open IE started from rule-based BIBREF0 and syntax-driven systems BIBREF1 , BIBREF2 , and recently has used neural networks for supervised learning BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 .",
"A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on trade-offs between the precision and recall of extracted assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 . However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions.",
"To calibrate open IE confidences and make them more globally comparable across different sentences, we propose an iterative rank-aware learning approach, as outlined in fig:arch. Given extractions generated by the model as training samples, we use a binary classification loss to explicitly increase the confidences of correct extractions and decrease those of incorrect ones. Without adding additional model components, this training paradigm naturally leads to a better open IE model, whose extractions can be further included as training samples. We further propose an iterative learning procedure that gradually improves the model by incrementally adding extractions to the training data. Experiments on the OIE2016 dataset BIBREF8 indicate that our method significantly outperforms both neural and non-neural models."
],
[
"We briefly revisit the formulation of open IE and the neural network model used in our paper."
],
[
"Given sentence $\\mathbf {s}=(w_1, w_2, ..., w_n)$ , the goal of open IE is to extract assertions in the form of tuples $\\mathbf {r}=(\\mathbf {p}, \\mathbf {a}_1, \\mathbf {a}_2, ..., \\mathbf {a}_m)$ , composed of a single predicate and $m$ arguments. Generally, these components in $\\mathbf {r}$ need not be contiguous, but to simplify the problem we assume they are contiguous spans of words from $\\mathbf {s}$ and there is no overlap between them.",
"Methods to solve this problem have recently been formulated as sequence-to-sequence generation BIBREF4 , BIBREF5 , BIBREF6 or sequence labeling BIBREF3 , BIBREF7 . We adopt the second formulation because it is simple and can take advantage of the fact that assertions only consist of words from the sentence. Within this framework, an assertion $\\mathbf {r}$ can be mapped to a unique BIO BIBREF3 label sequence $\\mathbf {y}$ by assigning $O$ to the words not contained in $\\mathbf {r}$ , $B_{p}$ / $I_{p}$ to the words in $\\mathbf {p}$ , and $B_{a_i}$ / $I_{a_i}$ to the words in $\\mathbf {a}_i$ respectively, depending on whether the word is at the beginning or inside of the span.",
"The label prediction $\\hat{\\mathbf {y}}$ is made by the model given a sentence associated with a predicate of interest $(\\mathbf {s}, v)$ . At test time, we first identify verbs in the sentence as candidate predicates. Each sentence/predicate pair is fed to the model and extractions are generated from the label sequence."
],
[
"Our training method in sec:ours could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE BIBREF3 , BIBREF9 , a stacked BiLSTM with highway connections BIBREF10 , BIBREF11 and recurrent dropout BIBREF12 . Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $\n\\mathbf {x}_t = [\\mathbf {W}_{\\text{emb}}(w_t), \\mathbf {W}_{\\text{mask}}(w_t = v)].\n$ ",
"The probability of the label at each position is calculated independently using a softmax function: $\nP(y_t|\\mathbf {s}, v) \\propto \\text{exp}(\\mathbf {W}_{\\text{label}}\\mathbf {h}_t + \\mathbf {b}_{\\text{label}}),\n$ ",
"where $\\mathbf {h}_t$ is the hidden state of the last layer. At decoding time, we use the Viterbi algorithm to reject invalid label transitions BIBREF9 , such as $B_{a_2}$ followed by $I_{a_1}$ .",
"We use average log probability of the label sequence BIBREF5 as its confidence: ",
"$$c(\\mathbf {s}, v, \\hat{\\mathbf {y}}) = \\frac{\\sum _{t=1}^{|\\mathbf {s}|}{\\log {P(\\hat{y_t}|\\mathbf {s}, v)}}}{|\\mathbf {s}|}.$$ (Eq. 7) ",
"The probability is trained with maximum likelihood estimation (MLE) of the gold extractions. This formulation lacks an explicit concept of cross-sentence comparison, and thus incorrect extractions of one sentence could have higher confidence than correct extractions of another sentence."
],
[
"In this section, we describe our proposed binary classification loss and iterative learning procedure."
],
[
"To alleviate the problem of incomparable confidences across sentences, we propose a simple binary classification loss to calibrate confidences to be globally comparable. Given a model $\\theta ^\\prime $ trained with MLE, beam search is performed to generate assertions with the highest probabilities for each predicate. Assertions are annotated as either positive or negative with respect to the gold standard, and are used as training samples to minimize the hinge loss: ",
"$$\\hspace{-2.84526pt}\\hat{\\theta } = \\underset{\\theta }{\\operatornamewithlimits{arg\\,min}}\\hspace{-8.53581pt}\\underset{\\begin{array}{c}\\mathbf {s} \\in \\mathcal {D}\\\\ v, \\hat{\\mathbf {y}} \\in g_{\\theta ^\\prime }(\\mathbf {s})\\end{array}}{\\operatorname{\\mathbb {E}}}\\hspace{-11.38109pt}\\max {(0,1-t \\cdot c_{\\theta }(\\mathbf {s}, v, \\hat{\\mathbf {y}}))},$$ (Eq. 9) ",
"where $\\mathcal {D}$ is the training sentence collection, $g_{\\theta ^\\prime }$ represents the candidate generation process, and $t \\in \\lbrace 1,-1\\rbrace $ is the binary annotation. $c_{\\theta }(\\mathbf {s}, v, \\hat{\\mathbf {y}})$ is the confidence score calculated by average log probability of the label sequence.",
"The binary classification loss distinguishes positive extractions from negative ones generated across different sentences, potentially leading to a more reliable confidence measure and better ranking performance."
],
[
"Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one-round of training becomes better not only at confidence modeling, but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (alg:iter) incrementally includes extractions generated by the current model as training samples to optimize the binary classification loss to obtain a better model, and this procedure is continued until convergence. [t] training data $\\mathcal {D}$ , initial model $\\theta ^{(0)}$ model after convergence $\\theta $ $t \\leftarrow 0$ # iteration",
" $\\mathcal {E} \\leftarrow \\emptyset $ # generated extractions",
"not converge $\\mathcal {E} \\leftarrow \\mathcal {E} \\cup \\lbrace (\\mathbf {s}, v, \\hat{\\mathbf {y}})|v,\\hat{\\mathbf {y}} \\in g_{\\theta ^{(t)}}(\\mathbf {s}), \\forall \\mathbf {s} \\in \\mathcal {D}\\rbrace $ ",
" $\\theta ^{(t+1)} \\leftarrow \\underset{\\theta }{\\operatornamewithlimits{arg\\,min}}\\hspace{-8.53581pt}\\underset{(\\mathbf {s}, v, \\hat{\\mathbf {y}})\\in \\mathcal {E}}{\\operatorname{\\mathbb {E}}}\\hspace{-8.53581pt}\\max {(0,1-t \\cdot c_{\\theta }(\\mathbf {s}, v, \\hat{\\mathbf {y}}))}$ ",
" $t \\leftarrow t+1$ Iterative learning. "
],
[
"We use the OIE2016 dataset BIBREF8 to evaluate our method, which only contains verbal predicates. OIE2016 is automatically generated from the QA-SRL dataset BIBREF13 , and to remove noise, we remove extractions without predicates, with less than two arguments, and with multiple instances of an argument. The statistics of the resulting dataset are summarized in tab:data.",
"We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score. An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts.",
"We compare our method with both competitive neural and non-neural models, including RnnOIE BIBREF3 , OpenIE4, ClausIE BIBREF2 , and PropS BIBREF14 .",
"Our implementation is based on AllenNLP BIBREF15 by adding binary classification loss function on the implementation of RnnOIE. The network consists of 4 BiLSTM layers (2 forward and 2 backward) with 64-dimensional hidden units. ELMo BIBREF16 is used to map words into contextualized embeddings, which are concatenated with a 100-dimensional predicate indicator embedding. The recurrent dropout probability is set to 0.1. Adadelta BIBREF17 with $\\epsilon =10^{-6}$ and $\\rho =0.95$ and mini-batches of size 80 are used to optimize the parameters. Beam search size is 5."
],
[
"tab:expmain lists the evaluation results. Our base model (RnnOIE, sec:oie) performs better than non-neural systems, confirming the advantage of supervised training under the sequence labeling setting. To test if the binary classification loss (Eq. 9, sec:ours) could yield better-calibrated confidence, we perform one round of fine-tuning of the base model with the hinge loss ( $+$ Binary loss in tab:expmain). We show both the results of using the confidence (Eq. 7) of the fine-tuned model to rerank the extractions of the base model (Rerank Only), and the end-to-end performance of the fine-tuned model in assertion generation (Generate). We found both settings lead to improved performance compared to the base model, which demonstrates that calibrating confidence using binary classification loss can improve the performance of both reranking and assertion generation. Finally, our proposed iterative learning approach (alg:iter, sec:ours) significantly outperforms non-iterative settings.",
"We also investigate the performance of our iterative learning algorithm with respect to the number of iterations in fig:iter. The model obtained at each iteration is used to both rerank the extractions generated by the previous model and generate new extractions. We also report results of using only positive samples for optimization. We observe that the AUC and F1 of both reranking and generation increase simultaneously for the first 6 iterations and converge after that, which demonstrates the effectiveness of iterative training. The best performing iteration achieves AUC of 0.125 and F1 of 0.315, outperforming all the baselines by a large margin. Meanwhile, using both positive and negative samples consistently outperforms only using positive samples, which indicates the necessity of exposure to the errors made by the system.",
"tab:casererank compares extractions from RnnOIE before and after reranking. We can see the order is consistent with the annotation after reranking, showing the additional loss function's efficacy in calibrating the confidences; this is particularly common in extractions with long arguments. tab:casegen shows a positive extraction discovered after iterative training (first example), and a wrong extraction that disappears (second example), which shows that the model also becomes better at assertion generation.",
"Why is the performance still relatively low? We randomly sample 50 extractions generated at the best performing iteration and conduct an error analysis to answer this question. To count as a correct extraction, the number and order of the arguments should be exactly the same as the ground truth and syntactic heads must be included, which is challenging considering that the OIE2016 dataset has complex syntactic structures and multiple arguments per predicate.",
"We classify the errors into three categories and summarize their proportions in tab:err. “Overgenerated predicate” is where predicates not included in ground truth are overgenerated, because all the verbs are used as candidate predicates. An effective mechanism should be designed to reject useless candidates. “Wrong argument” is where extracted arguments do not coincide with ground truth, which is mainly caused by merging multiple arguments in ground truth into one. “Missing argument” is where the model fails to recognize arguments. These two errors usually happen when the structure of the sentence is complicated and coreference is involved. More linguistic information should be introduced to solve these problems."
],
[
"We propose a binary classification loss function to calibrate confidences in open IE. Iteratively optimizing the loss function enables the model to incrementally learn from trial and error, yielding substantial improvement. An error analysis is performed to shed light on possible future directions."
],
[
"This work was supported in part by gifts from Bosch Research, and the Carnegie Bosch Institute."
]
]
}
|
{
"question": [
"How does this compare to traditional calibration methods like Platt Scaling?",
"What's the input representation of OpenIE tuples into the model?"
],
"question_id": [
"ca7e71131219252d1fab69865804b8f89a2c0a8f",
"d77c9ede2727c28e0b5a240b2521fd49a19442e0"
],
"nlp_background": [
"two",
"two"
],
"topic_background": [
"familiar",
"familiar"
],
"paper_read": [
"no",
"no"
],
"search_query": [
"information extraction",
"information extraction"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "No reliability diagrams are provided and no explicit comparison is made between confidence scores or methods.",
"evidence": [
"Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one-round of training becomes better not only at confidence modeling, but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (alg:iter) incrementally includes extractions generated by the current model as training samples to optimize the binary classification loss to obtain a better model, and this procedure is continued until convergence. [t] training data $\\mathcal {D}$ , initial model $\\theta ^{(0)}$ model after convergence $\\theta $ $t \\leftarrow 0$ # iteration",
"A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on trade-offs between the precision and recall of extracted assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 . However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions.",
"We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score. An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts."
],
"highlighted_evidence": [
"Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision.",
"For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 ",
"We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score."
]
}
],
"annotation_id": [
"23c5a7ddd1f154488e822601198303f3e02cc4f7"
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "word embeddings",
"evidence": [
"Our training method in sec:ours could potentially be used with any probabilistic open IE model, since we make no assumptions about the model and only the likelihood of the extraction is required for iterative rank-aware learning. As a concrete instantiation in our experiments, we use RnnOIE BIBREF3 , BIBREF9 , a stacked BiLSTM with highway connections BIBREF10 , BIBREF11 and recurrent dropout BIBREF12 . Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $ \\mathbf {x}_t = [\\mathbf {W}_{\\text{emb}}(w_t), \\mathbf {W}_{\\text{mask}}(w_t = v)]. $"
],
"highlighted_evidence": [
"Input of the model is the concatenation of word embedding and another embedding indicating whether this word is predicate: $ \\mathbf {x}_t = [\\mathbf {W}_{\\text{emb}}(w_t), \\mathbf {W}_{\\text{mask}}(w_t = v)]. $"
]
}
],
"annotation_id": [
"250e402e903ac21b69fd0cc88469064e3efc5d04"
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
}
]
}
|
{
"caption": [
"Figure 1: Iterative rank-aware learning.",
"Table 1: Dataset statistics.",
"Table 2: Case study of reranking effectiveness. Red for predicate and blue for arguments.",
"Figure 2: AUC and F1 at different iterations.",
"Table 4: AUC and F1 on OIE2016.",
"Table 5: Proportions of three errors."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Figure2-1.png",
"4-Table4-1.png",
"5-Table5-1.png"
]
}
|
1909.07863
|
Character-Centric Storytelling
|
Sequential vision-to-language or visual storytelling has recently been one of the areas of focus in computer vision and language modeling domains. Though existing models generate narratives that read subjectively well, there could be cases when these models miss out on generating stories that account for and address all prospective human and animal characters in the image sequences. Considering this scenario, we propose a model that implicitly learns relationships between provided characters and thereby generates stories with respective characters in scope. We use the VIST dataset for this purpose and report numerous statistics on the dataset. Eventually, we describe the model, explain the experiment and discuss our current status and future work.
|
{
"section_name": [
"Introduction",
"Related work",
"Data",
"Data ::: Character extraction",
"Data ::: Character analysis",
"Model",
"Model ::: Character semantics",
"Model ::: Encoder",
"Model ::: Decoder",
"Experiments",
"Experiments ::: Method 1",
"Experiments ::: Method 2",
"Discussion",
"Conclusion"
],
"paragraphs": [
[
"Visual storytelling and album summarization tasks have recently been of focus in the domain of computer vision and natural language processing. With the advent of new architectures, solutions for problems like image captioning and language modeling are getting better. Therefore, it is only natural to work towards storytelling; deeper visual context yielding a more expressive language style, as it could potentially improve various applications involving tasks using visual descriptions and visual question answering BIBREF0.",
"Since the release of the VIST visual storytelling dataset BIBREF1, there have been numerous approaches modeling the behavior of stories, leveraging and extending successful sequence-to-sequence based image captioning architectures. Some of them primarily addressed means of incorporating image-sequence feature information into a narrative generating network BIBREF2, BIBREF3, while others focused on model learning patterns and behavioral orientations with changes in back-propagation methods BIBREF4, BIBREF5. Motivated by these works we now want to understand the importance of characters and their relationships in visual storytelling.",
"Specifically, we extract characters from the VIST dataset, analyze their influence across the dataset and exploit them for paying attention to relevant visual segments during story-generation. We report our findings, discuss the directions of our ongoing work and suggest recommendations for using characters as semantics in visual storytelling."
],
[
"BIBREF1 published the VIST dataset along with a baseline sequence-to-sequence learning model that generates stories for image sequences in the dataset. Gradually, as a result of the 2018 storytelling challenge, there have been other works on VIST. Most of them extended the encoder-decoder architecture introduced in the baseline publication by adding attention mechanisms BIBREF3, learning positionally dependent parameters BIBREF2 and using reinforcement learning based methods BIBREF4, BIBREF5.",
"To our best knowledge, there are no prior works making use of characters for visual storytelling. The only work that uses any additional semantics for story generation is BIBREF5. They propose a hierarchical model structure which first generates a “semantic topic\" for each image in the sequence and then uses that information during the generation phase. The core module of their hierarchical model is a Semantic Compositional Network (SCN) BIBREF6, a recurrent neural network variant generating text conditioned on the provided semantic concepts.",
"Unlike traditional attention mechanisms, the SCN assembles the information on semantics directly into the neural network cell. It achieves this by extending the gate and state weight matrices to adhere to additional semantic information provided for the language generation phase. Inspired by the results SCN achieved for image and video captioning, we use it for storytelling. The semantic concepts we use are based on character frequencies and their co-occurrence information extracted from the stories of the VIST dataset.",
"Our expectation is that the parameters of the language decoder network generating the story are dependent on the character semantics and would learn to capture linguistic patterns while simultaneously learning mappings to respective visual features of the image sequence."
],
[
"We used the Visual storytelling (VIST) dataset comprising of image sequences obtained from Flickr albums and respective annotated descriptions collected through Amazon Mechanical Turk BIBREF1. Each sequence has 5 images with corresponding descriptions that together make up for a story. Furthermore, for each Flickr album there are 5 permutations of a selected set of its images. In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories."
],
[
"We extracted characters out of the VIST dataset. To this end, we considered that a character is either “a person\" or “an animal\". We decided that the best way to do this would be by making use of the human-annotated text instead of images for the sake of being diverse (e.g.: detection on images would yield “person\", as opposed to father).",
"The extraction takes place as a two-step process:",
"Identification of nouns: We first used a pretrained part-of-speech tagger BIBREF7 to identify all kinds of nouns in the annotations. Specifically, these noun categories are NN – common, singular or mass, NNS – noun, common, plural, NNP – noun, proper, singular, and NNPS – noun, proper, plural.",
"Filtering for hypernyms: WordNet BIBREF8 is a lexical database over the English language containing various semantic relations and synonym sets. Hypernym is one such semantic relation constituting a category into which words with more specific meanings fall. From among the extracted nouns, we thereby filtered those words that have their lowest common hypernym as either “person\" or “animal\"."
],
[
"We analyzed the VIST dataset from the perspective of the extracted characters and observed that 20,405 training, 2,349 validation and 2,768 testing data samples have at least one character present among their stories. This is approximately 50% of the data samples in the entire dataset. To pursue the prominence of relationships between these characters, we analyzed these extractions for both individual and co-occurrence frequencies.",
"We found a total of 1,470 distinct characters with 1,333 in training, 387 in validation and 466 in the testing splits. This can be considered as an indication to the limited size of the dataset because the number of distinct characters within each split is strongly dependent on the respective size of that split.",
"Figure FIGREF3 plots the top 30 most frequent characters in the training split of the dataset. Apart from the character “friends\" there is a gradual decrease in the occurrence frequencies of the other characters from “mom\" to “grandmother\". Similarly, in Figure FIGREF4, which plots the top 30 most co-occurring character pairs, (“dad\", “mom\"), (“friend\", “friends\") pairs occur drastically more number of times than other pairs in the stories. This can lead to an inclination bias of the story generator towards these characters owing to the data size limitations we discussed.",
"In the process of detecting characters, we observed also that $\\sim $5000 distinct words failed on WordNet due to their misspellings (“webxites\"), for being proper nouns (“cathrine\"), for being an abbreviation (“geez\"), and simply because they were compound words (“sing-a-long\"). Though most of the models ignore these words based on a vocabulary threshold value (typically 3), we would like to comment that language model creation without accounting for these words could adversely affect the behavior of narrative generation."
],
[
"Our model in Figure FIGREF6 follows the encoder-decoder structure. The encoder module incorporates the image sequence features, obtained using a pretrained convolutional network, into a subject vector. The decoder module, a semantically compositional recurrent network (SCN) BIBREF6, uses the subject vector along with character probabilities and generates a relevant story."
],
[
"The relevant characters with respect to each data-sample are obtained as a preprocessing step. We denote characters extracted from the human-annotated stories of respective image-sequences as active characters. We then use these active characters to obtain other characters which could potentially influence the narrative to be generated. We denote these as passive characters and they can be obtained using various methods. We describe some methods we tried in Section SECREF5. The individual frequencies of these relevant characters, active and passive are then normalized by the vocabulary size and constitute the character probabilities."
],
[
"Images of a sequence are initially passed through a pretrained ResNet network BIBREF9, for obtaining their features. The features extracted are then provided to the encoder module, which is a simple recurrent neural network employed to learn parameters for incorporating the subjects in the individual feature sets into a subject vector."
],
[
"We use the SCN-LSTM variant of the recurrent neural network for the decoder module as shown in Figure FIGREF10. The network extends each weight matrix of the conventional LSTM to be an ensemble of a set of tag-dependent weight matrices, subjective to the character probabilities. Subject vector from the encoder is fed into the LSTM to initialize the first step. The LSTM parameters utilized when decoding are weighted by the character probabilities, for generating a respective story.",
"Gradients $\\nabla $, propagated back to the network, nudge the parameters $W$ to learn while adhering to respective character probabilities $\\vec{cp}$:",
"Consequently, the encoder parameters move towards incorporating the image-sequence features better."
],
[
"We report the current status of our work and the intended directions of progress we wish to make using the designed model. All experiments were performed on the VIST dataset.",
"As mentioned in Section SECREF5, passive characters can be selected by conditioning their relationships on several factors. We explain two such methods:"
],
[
"In the first method we naïvely select all the characters co-occurring with respective active characters. Subsequently, probabilities for these passive characters are co-occurrence counts normalized by the corpus vocabulary size. This method enables the model to learn parameters on the distribution of character relationships."
],
[
"In the second approach, we conditionally select a limited number of characters that collectively co-occur most with the respective active characters. This is visualized in Figure FIGREF13. The selected passive characters “girlfriend\", “father\" and “son\" collectively co-occur in the most co-occurring characters of the active characters. $K$ in this case is a tunable hyperparameter."
],
[
"Both methods we are experimenting with exhibit different initial traits. We are currently working towards analyzing the character relationships learned by the models and understanding the abstract concepts that get generated as a result of such learning. We do not report any generated stories and evaluations yet as we consider that to be premature without proper examination. However, we feel the training process metrics are encouraging and provide us with enough intuition for pursuing the proposed approach to its fullest scope."
],
[
"We have extracted, analyzed and exploited characters in the realm of storytelling using the VIST dataset. We have provided a model that can make use of the extracted characters to learn their relationships and thereby generate grounded and subjective narratives for respective image sequences. For future work we would like to make the encoder semantically compositional by extracting visual tags and also explore ways to improve learning of character relationships while avoiding overfitting."
]
]
}
|
{
"question": [
"What statistics on the VIST dataset are reported?"
],
"question_id": [
"a9610cbcca813f4376fbfbf21cc14689c7fbd677"
],
"nlp_background": [
"zero"
],
"topic_background": [
"familiar"
],
"paper_read": [
"no"
],
"search_query": [
"computer vision"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We used the Visual storytelling (VIST) dataset comprising of image sequences obtained from Flickr albums and respective annotated descriptions collected through Amazon Mechanical Turk BIBREF1. Each sequence has 5 images with corresponding descriptions that together make up for a story. Furthermore, for each Flickr album there are 5 permutations of a selected set of its images. In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories."
],
"highlighted_evidence": [
"We used the Visual storytelling (VIST) dataset comprising of image sequences obtained from Flickr albums and respective annotated descriptions collected through Amazon Mechanical Turk BIBREF1. Each sequence has 5 images with corresponding descriptions that together make up for a story. Furthermore, for each Flickr album there are 5 permutations of a selected set of its images. In the overall available data there are 40,071 training, 4,988 validation, and 5,050 usable testing stories."
]
}
],
"annotation_id": [
"0fb4bdc1c9e4e5c0f5f9d97660b1a8511f3bae0a"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
}
|
{
"caption": [
"Figure 1: Character frequencies (training split)",
"Figure 2: Characters co-occurrence frequencies (training split)",
"Figure 3: The model follows the encoder-decoder structure. Additional character semantics passed to the decoder module regulate its state parameters.",
"Figure 4: (Gan et al., 2016), v and s denote the visual and semantic features respectively. Each triangle symbol represents an ensemble of tag dependent weight matrices",
"Figure 5: Conditional on collective co-occurrences"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-Figure5-1.png"
]
}
|
1906.07234
|
Combining Adversarial Training and Disentangled Speech Representation for Robust Zero-Resource Subword Modeling
|
This study addresses the problem of unsupervised subword unit discovery from untranscribed speech. It forms the basis of the ultimate goal of ZeroSpeech 2019, building text-to-speech systems without text labels. In this work, unit discovery is formulated as a pipeline of phonetically discriminative feature learning and unit inference. One major difficulty in robust unsupervised feature learning is dealing with speaker variation. Here the robustness towards speaker variation is achieved by applying adversarial training and FHVAE based disentangled speech representation learning. A comparison of the two approaches as well as their combination is studied in a DNN-bottleneck feature (DNN-BNF) architecture. Experiments are conducted on ZeroSpeech 2019 and 2017. Experimental results on ZeroSpeech 2017 show that both approaches are effective while the latter is more prominent, and that their combination brings further marginal improvement in across-speaker condition. Results on ZeroSpeech 2019 show that in the ABX discriminability task, our approaches significantly outperform the official baseline, and are competitive to or even outperform the official topline. The proposed unit sequence smoothing algorithm improves synthesis quality, at a cost of slight decrease in ABX discriminability.
|
{
"section_name": [
"Introduction",
"General framework",
"Speaker-invariant feature learning by FHVAEs",
"Speaker adversarial multi-task learning",
"Subword unit inference and smoothing",
"Dataset and evaluation metric",
"System setup",
"Experimental results",
"Dataset and evaluation metrics",
"Conclusions",
"Acknowledgements"
],
"paragraphs": [
[
"Nowadays speech processing is dominated by deep learning techniques. Deep neural network (DNN) acoustic models (AMs) for the tasks of automatic speech recognition (ASR) and speech synthesis have shown impressive performance for major languages such as English and Mandarin. Typically, training a DNN AM requires large amounts of transcribed data. For a large number of low-resource languages, for which very limited or no transcribed data are available, conventional methods of acoustic modeling are ineffective or even inapplicable.",
"In recent years, there has been an increasing research interest in zero-resource speech processing, i.e., only a limited amount of raw speech data (e.g. hours or tens of hours) are given while no text transcriptions or linguistic knowledge are available. The Zero Resource Speech Challenges (ZeroSpeech) 2015 BIBREF0 , 2017 BIBREF1 and 2019 BIBREF2 precisely focus on this area. One problem tackled by ZeroSpeech 2015 and 2017 is subword modeling, learning frame-level speech representation that is discriminative to subword units and robust to linguistically-irrelevant factors such as speaker change. The latest challenge ZeroSpeech 2019 goes a step further by aiming at building text-to-speech (TTS) systems without any text labels (TTS without T) or linguistic expertise. Specifically, one is required to build an unsupervised subword modeling sub-system to automatically discover phoneme-like units in the concerned language, followed by applying the learned units altogether with speech data from which the units are inferred to train a TTS. Solving this problem may partially assist psycholinguists in understanding young children's language acquisition mechanism BIBREF2 .",
"This study addresses unsupervised subword modeling in ZeroSpeech 2019, which is also referred to as acoustic unit discovery (AUD). It is an essential problem and forms the basis of TTS without T. The exact goal of this problem is to represent untranscribed speech utterances by discrete subword unit sequences, which is slightly different from subword modeling in the contexts of ZeroSpeech 2017 & 2015. In practice, it can be formulated as an extension to the previous two challenges. For instance, after learning the subword discriminative feature representation at frame-level, the discrete unit sequences can be inferred by applying vector quantization methods followed by collapsing consecutive repetitive symbolic patterns. In the previous two challenges, several unsupervised representation learning approaches were proposed for comparison, such as cluster posteriorgrams (PGs) BIBREF3 , BIBREF4 , BIBREF5 , DNN bottleneck features BIBREF6 , BIBREF7 , autoencoders (AEs) BIBREF8 , BIBREF9 , variational AEs (VAEs) BIBREF10 , BIBREF11 and siamese networks BIBREF12 , BIBREF13 , BIBREF14 .",
"One major difficulty in unsupervised subword modeling is dealing with speaker variation. The huge performance degradation caused by speaker variation reported in ZeroSpeech 2017 BIBREF1 implies that speaker-invariant representation learning is crucial and remains to be solved. In ZeroSpeech 2019, speaker-independent subword unit inventory is highly desirable in building a TTS without T system. In the literature, many works focused on improving the robustness of unsupervised feature learning towards speaker variation. One direction is to apply linear transform methods. Heck et al. BIBREF5 estimated fMLLR features in an unsupervised manner. Works in BIBREF6 , BIBREF15 estimated fMLLR using a pre-trained out-of-domain ASR. Chen et al. BIBREF7 applied vocal tract length normalization (VTLN). Another direction is to employ DNNs. Zeghidour et al. BIBREF13 proposed to train subword and speaker same-different tasks within a triamese network and untangle linguistic and speaker information. Chorowski et al. BIBREF11 defined a speaker embedding as a condition of VAE decoder to free the encoder from capturing speaker information. Tsuchiya et al. BIBREF16 applied speaker adversarial training in a task related to the zero-resource scenario but transcription for a target language was used in model training.",
"In this paper, we propose to extend our recent research findings BIBREF10 on applying disentangled speech representation learned from factorized hierarchical VAE (FHVAE) models BIBREF17 to improve speaker-invariant subword modeling. The contributions made in this study are in several aspects. First, the FHVAE based speaker-invariant learning is compared with speaker adversarial training in the strictly unsupervised scenario. Second, the combination of adversarial training and disentangled representation learning is studied. Third, our proposed approaches are evaluated on the latest challenge ZeroSpeech 2019, as well as on ZeroSpeech 2017 for completeness. To our best knowledge, direct comparison of the two approaches and their combination has not been studied before."
],
[
"The general framework of our proposed approaches is illustrated in Figure FIGREF2 . Given untranscribed speech data, the first step is to learn speaker-invariant features to support frame labeling. The FHVAE model BIBREF17 is adopted for this purpose. FHVAEs disentangle linguistic content and speaker information encoded in speech into different latent representations. Compared with raw MFCC features, FHVAE reconstructed features conditioned on latent linguistic representation are expected to keep linguistic content unchanged and are more speaker-invariant. Details of the FHVAE structure and feature reconstruction methods are described in Section SECREF3 .",
"The reconstructed features are fed as inputs to Dirichlet process Gaussian mixture model (DPGMM) BIBREF18 for frame clustering, as was done in BIBREF3 . The frame-level cluster labels are regarded as pseudo phone labels to support supervised DNN training. Motivated by successful applications of adversarial training BIBREF19 in a wide range of domain invariant learning tasks BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , this work proposes to add an auxiliary adversarial speaker classification task to explicitly target speaker-invariant feature learning. After speaker adversarial multi-task learning (AMTL) DNN training, softmax PG representation from pseudo phone classification task is used to infer subword unit sequences. The resultant unit sequences are regarded as pseudo transcriptions for subsequent TTS training."
],
[
"The FHVAE model formulates the generation process of sequential data by imposing sequence-dependent and sequence-independent priors to different latent variables BIBREF17 . It consists of an inference model INLINEFORM0 and a generation model INLINEFORM1 . Let INLINEFORM2 denote a speech dataset with INLINEFORM3 sequences. Each INLINEFORM4 contains INLINEFORM5 speech segments INLINEFORM6 , where INLINEFORM7 is composed of fixed-length consecutive frames. The FHVAE model generates a sequence INLINEFORM8 from a random process as follows: (1) An s-vector INLINEFORM9 is drawn from a prior distribution INLINEFORM10 ; (2) Latent segment variables INLINEFORM11 and latent sequence variables INLINEFORM12 are drawn from INLINEFORM13 and INLINEFORM14 respectively; (3) Speech segment INLINEFORM15 is drawn from INLINEFORM16 . Here INLINEFORM17 denotes standard normal distribution, INLINEFORM18 and INLINEFORM19 are parameterized by DNNs. The joint probability for INLINEFORM20 is formulated as, DISPLAYFORM0 ",
"Since the exact posterior inference is intractable, the FHVAE introduces an inference model INLINEFORM0 to approximate the true posterior, DISPLAYFORM0 ",
"Here INLINEFORM0 and INLINEFORM1 are all diagonal Gaussian distributions. The mean and variance values of INLINEFORM2 and INLINEFORM3 are parameterized by two DNNs. For INLINEFORM4 , during FHVAE training, a trainable lookup table containing posterior mean of INLINEFORM5 for each sequence is updated. During testing, maximum a posteriori (MAP) estimation is used to infer INLINEFORM6 for unseen test sequences. FHVAEs optimize the discriminative segmental variational lower bound which was defined in BIBREF17 . It contains a discriminative objective to prevent INLINEFORM7 from being the same for all utterances.",
"After FHVAE training, INLINEFORM0 encodes segment-level factors e.g. linguistic information, while INLINEFORM1 encodes sequence-level factors that are relatively consistent within an utterance. By concatenating training utterances of the same speaker into a single sequence for FHVAE training, the learned INLINEFORM2 is expected to be discriminative to speaker identity. This work considers applying s-vector unification BIBREF10 to generate reconstructed feature representation that keeps linguistic content unchanged and is more speaker-invariant than the original representation. Specifically, a representative speaker with his/her s-vector (denoted as INLINEFORM3 ) is chosen from the dataset. Next, for each speech segment INLINEFORM4 of an arbitrary speaker INLINEFORM5 , its corresponding latent sequence variable INLINEFORM6 inferred from INLINEFORM7 is transformed to INLINEFORM8 , where INLINEFORM9 denotes the s-vector of speaker INLINEFORM10 . Finally the FHVAE decoder reconstructs speech segment INLINEFORM11 conditioned on INLINEFORM12 and INLINEFORM13 . The features INLINEFORM14 form our desired speaker-invariant representation."
],
[
"Speaker adversarial multi-task learning (AMTL) simultaneously trains a subword classification network ( INLINEFORM0 ), a speaker classification network ( INLINEFORM1 ) and a shared-hidden-layer feature extractor ( INLINEFORM2 ), where INLINEFORM3 and INLINEFORM4 are set on top of INLINEFORM5 , as illustrated in Figure FIGREF2 . In AMTL, the error is reversely propagated from INLINEFORM6 to INLINEFORM7 such that the output layer of INLINEFORM8 is forced to learn speaker-invariant features so as to confuse INLINEFORM9 , while INLINEFORM10 tries to correctly classify outputs of INLINEFORM11 into their corresponding speakers. At the same time, INLINEFORM12 learns to predict the correct DPGMM labels of input features, and back-propagate errors to INLINEFORM13 in a usual way.",
"Let INLINEFORM0 and INLINEFORM1 denote the network parameters of INLINEFORM2 and INLINEFORM3 , respectively. With the stochastic gradient descent (SGD) algorithm, these parameters are updated as, p p - Lpp, s s - Lss,",
"h h -[Lph - Lsh], where INLINEFORM0 is the learning rate, INLINEFORM1 is the adversarial weight, INLINEFORM2 and INLINEFORM3 are the loss values of subword and speaker classification tasks respectively, both in terms of cross-entropy. To implement Eqt. ( SECREF6 ), a gradient reversal layer (GRL) BIBREF19 was designed to connect INLINEFORM4 and INLINEFORM5 . The GRL acts as identity transform during forward-propagation and changes the sign of loss during back-propagation. After training, the output of INLINEFORM6 is speaker-invariant and subword discriminative bottleneck feature (BNF) representation of input speech. Besides, the softmax output representation of INLINEFORM7 is believed to carry less speaker information than that without performing speaker adversarial training."
],
[
"Subword unit sequences for the concerned untranscribed speech utterances are inferred from softmax PG representation of INLINEFORM0 in the speaker AMTL DNN. For each input frame to the DNN, the DPGMM label with the highest probability in PG representation is regarded as the subword unit assigned to this frame. These frame-level unit labels are further processed by collapsing consecutive repetitive labels to form pseudo transcriptions.",
"We observed non-smoothness in the inferred unit sequences by using the above methods, i.e., frame-level unit labels that are isolated without temporal repetition. Considering that ground-truth phonemes generally span at least several frames, these non-smooth labels are unwanted. This work proposes an empirical method to filter out part of the non-smooth unit labels, which is summarized in Algorithm SECREF7 .",
"Algorithm SECREF7 (Unit sequence smoothing): input – frame-level unit labels; output – a pseudo transcription with isolated, non-repetitive unit labels filtered out."
],
[
"ZeroSpeech 2017 development dataset consists of three languages, i.e. English, French and Mandarin. Speaker information for training sets are given while unknown for test sets. The durations of training sets are INLINEFORM0 and INLINEFORM1 hours respectively. Detailed information of the dataset can be found in BIBREF1 .",
"The evaluation metric is ABX subword discriminability. Basically, it is to decide whether INLINEFORM0 belongs to INLINEFORM1 or INLINEFORM2 if INLINEFORM3 belongs to INLINEFORM4 and INLINEFORM5 belongs to INLINEFORM6 , where INLINEFORM7 and INLINEFORM8 are speech segments, INLINEFORM9 and INLINEFORM10 are two phonemes that differ in the central sound (e.g., “beg”-“bag”). Each pair of INLINEFORM11 and INLINEFORM12 is spoken by the same speaker. Depending on whether INLINEFORM13 and INLINEFORM14 are spoken by the same speaker, ABX error rates for across-/within-speaker are evaluated separately."
],
[
"The FHVAE model is trained with merged training sets of all three target languages. Input features are fixed-length speech segments of 10 frames. Each frame is represented by a 13-dimensional MFCC with cepstral mean normalization (CMN) at speaker level. During training, speech utterances spoken by the same speaker are concatenated to a single training sequence. During the inference of hidden variables INLINEFORM0 and INLINEFORM1 , input segments are shifted by 1 frame. To match the length of latent variables with original features, the first and last frame are padded. To generate speaker-invariant reconstructed MFCCs using the s-vector unification method, a representative speaker is selected from training sets. In this work the English speaker “s4018” is chosen. The encoder and decoder networks of the FHVAE are both 2-layer LSTM with 256 neurons per layer. Latent variable dimensions for INLINEFORM2 and INLINEFORM3 are 32. FHVAE training is implemented by using an open-source tool BIBREF17 .",
"The FHVAE based speaker-invariant MFCC features with INLINEFORM0 and INLINEFORM1 are fed as inputs to DPGMM clustering. Training data for the three languages are clustered separately. The numbers of clustering iterations for English, French and Mandarin are INLINEFORM2 and 1400. After clustering, the numbers of clusters are INLINEFORM3 and 314. The obtained frame labels support multilingual DNN training. DNN input features are MFCC+CMVN. The layer-wise structure of INLINEFORM4 is INLINEFORM5 . Nonlinear function is sigmoid, except the linear BN layer. INLINEFORM6 contains 3 sub-networks, one for each language. The sub-network contains a GRL, a feed-forward layer (FFL) and a softmax layer. The GRL and FFL are 1024-dimensional. INLINEFORM7 also contains 3 sub-networks, each having a 1024-dimensional FFL and a softmax layer. During AMTL DNN training, the learning rate starts from INLINEFORM8 to INLINEFORM9 with exponential decay. The number of epochs is 5. Speaker adversarial weight INLINEFORM10 ranges from 0 to INLINEFORM11 . After training, BNFs extracted from INLINEFORM12 are evaluated by the ABX task. DNN is implemented using Kaldi BIBREF24 nnet3 recipe. DPGMM is implemented using tools developed by BIBREF18 .",
"DPGMM clustering towards raw MFCC features is also implemented to generate alternative DPGMM labels for comparison. In this case, the numbers of clustering iterations for the three languages are INLINEFORM0 and 3000. The numbers of clusters are INLINEFORM1 and 596. The DNN structure and training procedure are the same as mentioned above.",
"FHVAE model training and speaker-invariant MFCC reconstruction are performed following the configurations in ZeroSpeech 2017. The unit dataset is used for training. During MFCC reconstruction, a male speaker for each of the two languages is randomly selected as the representative speaker for s-vector unification. Our recent research findings BIBREF10 showed that male speakers are more suitable than females in generating speaker-invariant features. The IDs of the selected speakers are “S015” and “S002” in English and Surprise respectively. In DPGMM clustering, the numbers of clustering iterations are both 320. Input features are reconstructed MFCCs+ INLINEFORM0 + INLINEFORM1 . After clustering, the numbers of clusters are 518 and 693. The speaker AMTL DNN structure and training procedure follow configurations in ZeroSpeech 2017. One difference is the placement of adversarial sub-network INLINEFORM2 . Here INLINEFORM3 is put on top of the FFL in INLINEFORM4 instead of on top of INLINEFORM5 . Besides, the DNN is trained in a monolingual manner. After DNN training, PGs for voice and test sets are extracted. BNFs for test set are also extracted. Adversarial weights INLINEFORM6 ranging from 0 to INLINEFORM7 with a step size of INLINEFORM8 are evaluated on English test set.",
"The TTS model is trained with voice dataset and their subword unit sequences inferred from PGs. TTS training is implemented using tools BIBREF27 in the same way as in the baseline. The trained TTS synthesizes speech waveforms according to unit sequences inferred from test speech utterances. Algorithm SECREF7 is applied to voice set and optionally applied to test set."
],
[
"Average ABX error rates on BNFs over three target languages with different values of INLINEFORM0 are shown in Figure FIGREF11 .",
"In this Figure, INLINEFORM0 denotes that speaker adversarial training is not applied. From the dashed (blue) lines, it can be observed that speaker adversarial training could reduce ABX error rates in both across- and within-speaker conditions, with absolute reductions of INLINEFORM1 and INLINEFORM2 respectively. The amount of improvement is in accordance with the findings reported in BIBREF16 , despite that BIBREF16 exploited English transcriptions during training. The dash-dotted (red) lines show that when DPGMM labels generated by reconstructed MFCCs are employed in DNN training, the positive impact of speaker adversarial training in across-speaker condition is relatively limited. Besides, negative impact is observed in within-speaker condition. From Figure FIGREF11 , it can be concluded that for the purpose of improving the robustness of subword modeling towards speaker variation, frame labeling based on disentangled speech representation learning is more prominent than speaker adversarial training.",
"ABX error rates on subword unit sequences, PGs and BNFs with different values of INLINEFORM0 evaluated on English test set are shown in Figure FIGREF16 .",
"Algorithm SECREF7 is not applied at this stage. It is observed that speaker adversarial training could achieve INLINEFORM0 and INLINEFORM1 absolute error rate reductions on PG and BNF representations. The unit sequence representation does not benefit from adversarial training. Therefore, the optimal INLINEFORM2 for unit sequences is 0. The performance gap between frame-level PGs and unit sequences measures the phoneme discriminability distortion caused by the unit inference procedure in this work.",
"We fix INLINEFORM0 to train the TTS model, and synthesize test speech waveforms using the trained TTS. Experimental results of our submission systems are summarized in Table TABREF17 .",
"In this Table, “+SM” denotes applying sequence smoothing towards test set unit labels. Compared with the official baseline, our proposed approaches could significantly improve unit quality in terms of ABX discriminability. Our system without applying SM achieves INLINEFORM0 and INLINEFORM1 absolute error rate reductions in English and Surprise sets. If SM is applied, while the ABX error rate increases, improvements in all the other evaluation metrics are observed. This implies that for the goal of speech synthesis, there is a trade off between quality and quantity of the learned subword units. Besides, our ABX performance is competitive to, or even better than the supervised topline.",
"Our systems do not outperform the baseline in terms of synthesis quality. One possible explanation is that our learned subword units are much more fine-grained than those in the baseline AUD, making the baseline TTS less suitable for our AUD system. In the future, we plan to investigate alternative TTS models to take full advantage of our learned subword units."
],
[
"ZeroSpeech 2019 BIBREF2 provides untranscribed speech data for two languages. English is used for development while the surprise language (Indonesian) BIBREF25 , BIBREF26 is used for test only. Each language pack consists of training and test sets. The training set consists of a unit discovery dataset for building unsupervised subword models, and a voice dataset for training the TTS system. Details of ZeroSpeech 2019 datasets are listed in Table TABREF13 .",
"There are two categories of evaluation metrics in ZeroSpeech 2019. The metrics for text embeddings, e.g. subword unit sequences, BNFs and PGs, are ABX discriminability and bitrate. Bitrate is defined as the amount of information provided in the inferred unit sequences. The metrics for synthesized speech waveforms are character error rate (CER), speaker similarity (SS, 1 to 5, larger is better) and mean opinion score (MOS, 1 to 5, larger is better), all evaluated by native speakers."
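The bitrate metric described above can be approximated from the empirical symbol distribution of an inferred unit sequence: the number of symbols times their empirical entropy, divided by the duration. A rough sketch (the exact normalisation used by the official evaluation script may differ; this is an assumption for illustration):

```python
from collections import Counter
import math


def bitrate(units, duration_s):
    """Approximate bits per second of a discrete unit sequence:
    symbol count times empirical entropy, divided by duration in seconds."""
    counts = Counter(units)
    n = len(units)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * entropy / duration_s
```

A constant sequence carries no information (bitrate 0), while a denser or more varied unit inventory raises the bitrate, which is why the metric is reported alongside ABX discriminability.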
],
[
"This study tackles robust unsupervised subword modeling in the zero-resource scenario. Robustness to speaker variation is achieved by combining speaker adversarial training and FHVAE-based disentangled speech representation learning. Our proposed approaches are evaluated on ZeroSpeech 2019 and ZeroSpeech 2017. Experimental results on ZeroSpeech 2017 show that both approaches are effective, that the latter is more effective, and that their combination brings a further marginal improvement in the across-speaker condition. Results on ZeroSpeech 2019 show that our approaches achieve significant ABX error rate reductions over the baseline system. The proposed unit sequence smoothing algorithm improves synthesis quality, at the cost of a slight decrease in ABX discriminability."
],
[
"This research is partially supported by the Major Program of National Social Science Fund of China (Ref:13&ZD189), a GRF project grant (Ref: CUHK 14227216) from Hong Kong Research Grants Council and a direct grant from CUHK Research Committee."
]
]
}
|
{
"question": [
"What is the performance difference in performance in unsupervised feature learning between adverserial training and FHVAE-based disentangled speech represenation learning?"
],
"question_id": [
"64ab2b92e986e0b5058bf4f1758e849f6a41168b"
],
"nlp_background": [
"infinity"
],
"topic_background": [
"unfamiliar"
],
"paper_read": [
"no"
],
"search_query": [
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"0fc687f6d31b9dd5828bd8b28cbef135d1dd1ea7"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: General framework of our proposed approaches",
"Figure 2: Average ABX error rates on BNF over 3 languages",
"Table 2: Comparison of baseline, topline and our submission",
"Figure 3: ABX error rates on unit sequence, PG and BNF with different adversarial weights evaluated on English test set"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table2-1.png",
"4-Figure3-1.png"
]
}
|
1806.04535
|
Automatic Target Recovery for Hindi-English Code Mixed Puns
|
In order for our computer systems to be more human-like, with a higher emotional quotient, they need to be able to process and understand intrinsic human language phenomena like humour. In this paper, we consider a subtype of humour - puns, which are a common type of wordplay-based jokes. In particular, we consider code-mixed puns which have become increasingly mainstream on social media, in informal conversations and advertisements and aim to build a system which can automatically identify the pun location and recover the target of such puns. We first study and classify code-mixed puns into two categories namely intra-sentential and intra-word, and then propose a four-step algorithm to recover the pun targets for puns belonging to the intra-sentential category. Our algorithm uses language models, and phonetic similarity-based features to get the desired results. We test our approach on a small set of code-mixed punning advertisements, and observe that our system is successfully able to recover the targets for 67% of the puns.
|
{
"section_name": [
"Introduction",
"Puns",
"Code-mixing",
"Methodology",
"Classification",
"Dataset",
"Model",
"Results and discussion",
"Conclusion and Future work",
"Acknowledgements"
],
"paragraphs": [
[
"Humour is one of the most complex and intriguing phenomena of human language. It exists in various forms, across space and time, in literature and culture, and is a valued part of human interactions. Puns are one of the simplest and most common forms of humour in the English language. They are also one of the most widespread forms of spontaneous humour BIBREF0 and have found their place in casual conversations, literature, online comments, tweets and advertisements BIBREF1 , BIBREF2 . Puns are a hugely versatile and commonly used literary device, and it is essential to include them in any comprehensive approach to computational humour.",
"In this paper, we consider Hindi-English code-mixed puns and aim to automatically recover their targets. The target of a pun is its phonologically similar counterpart, the relationship to which and whose resolution (recovery) in the mind of the listener/hearer induces humour. For example, in the pun “The life of a patient of hypertension is always at steak.\" the word “steak\" is the pun with target “stake\".",
"With India being a diverse linguistic region, there is an ever increasing usage of code-mixed Hindi-English language (along with various others) because bilingualism and even multilingualism are quite common. Consequently, we have also seen an increase in the usage of code-mixed language in online forums, advertisements etc. Code-mixed humour, especially puns have become increasingly popular because being able to use the same punning techniques but with two languages in play has opened up numerous avenues for new and interesting wordplays. With the increasing popularity and acceptance for the usage of code-mixed language, it has become important that computers are also able to process it and even decipher complex phenomena like humour. Traditional Word Sense Disambiguation (WSD) based methods cannot be used in target recovery of code-mixed puns, because they are no longer about multiple senses of a single word but about two words from two different languages. Code-switching comes with no markers, and the punning word may not even be a word in either of the languages being used. Sometimes words from the two languages can be combined to form a word which only a bilingual speaker would understand. Hence, this task on such data calls for a different set of strategies altogether. We approach this problem in two parts. First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sentential and intra-word. Second, we develop a four stage pipeline to achieve our goal - Language Identification, Pun Candidate Identification, Context Lookup and Phonetic Distance Minimization. We then test our approach on a small dataset and note that our method is successfully able to recover targets for a majority of the puns.",
"To the best of our knowledge, this is a first attempt at dealing with code-mixed puns. The outline of the paper is as follows: Section 2 gives a brief description of the background and prior work on puns - both in the field of linguistics and in the field of computational humour, along with a brief introduction to the field of code-mixing. Section 3 defines our problem statement, our classification model on code-mixed puns, the dataset we use to test our approach, and our proposed model for the task of automatic target recovery of Hindi-English code-mixed puns. In Section 4, we analyse the performance of our model on a set of puns, and discuss the various error cases. Finally, we conclude in Section 5 with a review of our research contributions and an outline of our plans for future work."
],
[
"Puns are a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect BIBREF3 . Puns where the two meanings share the same pronunciation are known as homophonic or perfect puns, while those relying on similar but non-identical sounding words are known as heterophonic BIBREF4 or imperfect puns BIBREF5 . In this paper, we study automatic target recoverability of English-Hindi code mixed puns - which are more commonly imperfect puns, but may also be perfect puns in some cases.",
"Zwicky and Zwicky zwicky1986imperfect, Sobkowiak sobkowiak1991metaphonology extensively studied various phonological variations in imperfect puns such as strong asymmetry in phoneme substitution. They note that puns show more frequent changes in vowels than in consonants because of their smaller role in target recoverability.",
"Puns have received attention in the field of computational humour, both in generation of puns and their understanding.",
"Generation: One of the earliest attempts at generating humour was by Lessard and Levin lessard1992computational, when they built an antonym-based system to generate Tom Swifties. Since then, we have seen various other attempts at the task with different strategies. JAPE was a system which exploited framing and phonetic relationships to automatically generate funny punning riddles, or more specifically phonologically ambiguous riddles, having noun phrase punchlines BIBREF6 . Venour venour1999computational built a system which generated HCPPs (Homonym Common Phrase Pun), simple 2 sentence puns based on associations between words occurring in common phrases. WisCraic was a system built by McKay mckay2002generation, which generated simple one-sentence puns based on semantic associations of words. Valitutti et al. valitutti2008textual attempted to automatically generate advertisements by punning on familiar expressions, with an affective connotation.",
"Identification and understanding: Hempelmann hempelmann2003paronomasic studied target recoverability, arguing that a good model for it provides necessary groundwork for effective automatic pun generation. He worked on a theory which models prominent factors in punning such as phonological similarity and studied how these measures could be used to evaluate possible imperfect puns given an input word and a set of target words.",
"Yokogawa yokogawa2002japanese analyzed ungrammatical Japanese puns and generated target candidates by replacing ungrammatical parts of the sentence by similar expressions. Taylor and Mazlack taylor2004computationally worked on computational recognition of word-play in the restricted domain of Knock-Knock jokes. Jaech et al. jaech2016phonological developed a computational model for target recovery of puns using techniques for automatic speech recognition, and learned phone edit probabilities in puns. Miller and Gurevych Miller2015AutomaticDO, Miller et al.miller2017semeval describe different methods on pun identification and disambiguation. Word Sense Disambiguation (WSD) based techniques are most common among the methods used.",
"To the best of our knowledge no prior work has been attempted on code-mixed puns."
],
[
"Code-mixing is the mixing of two or more languages or language varieties. Code-mixing is now recognized as a natural part of bilingual and multilingual language use. Significant linguistic efforts have been made to understand the sociological and conversational necessity behind code-switching BIBREF7 ; for example, to understand whether it is an act of identity in a social group, or a consequence of a lack of competence in either of the languages. These papers distinguish between inter-sentence, intra-sentence and intra-word code mixing.",
"Different types of language mixing phenomena have been discussed and defined by several linguists, with some making clear distinctions between phenomena based on certain criteria, while others use `code-mixing’ or `code-switching’ as umbrella terms to include any type of language mixing — see, e.g., Muysken muysken1995code or Gafaranga and Torras gafaranga2002interactional. In this paper, we use both these terms ‘code-mixing’ and `code-switching' interchangeably.",
"Coming to the work on automatic analysis of code-mixed languages, there have been studies on detecting code mixing in spoken language as well as different types of short texts, such as information retrieval queries BIBREF8 , SMS messages BIBREF9 , BIBREF10 , social media data BIBREF11 and online conversations BIBREF12 . These scholars have carried out experiments for the task of language identification using language models, dictionaries, logistic regression classification, Conditional Random Fields, SVMs, and noted that approaches using contextual knowledge were most robust. King and Abney king2013labeling used weakly semi-supervised methods to perform word-level language identification.",
"We however, use a dictionary based approach for the language identification task. While working with puns, ambiguity in language identification can be an important marker for identifying the pun, so it is more important for us to recognize all possible ambiguities rather than picking just one depending on probabilities. This ability to recognize ambiguities, and the simplicity of a dictionary-based language identification model makes it suited for this task."
],
[
"We focus on the task of automatically disambiguating or recovering Hindi-English code mixed puns. For this purpose, it is first necessary to understand what these puns are."
],
[
"For the purposes of this research, we only consider puns where the ambiguity or the wordplay lies in the code-switching i.e, the pun word and its target are from different languages. For example the pun \"Rivers can't hear because woh behri hoti hai.\" is a sentence with the pun being behri (meaning deaf) and its target being beh rahi (meaning flowing). Here, while the sentence is code-mixed, the pun word and the target both belong to the same language. We do not consider such puns for the present study.",
"We analyze the structure of code-mixed puns with the pun word and its target belonging to different languages and propose two broad categories to classify them in - puns where the code-mixing is intra-sentential and the other where it is intra-word. Both these categories are explained below, while we evaluate only on the former category.",
"Intra-sentential code-mixing is where code-switching occurs within a sentence. Here, the language varies at the word level. Also, each word of the sentence belongs to one or the other language. Table 1 gives examples of puns belonging to this category.",
"In this category, code mixing is present within a word. New words are formed using Portmanteau or Blending where two or more syllables/phonemes from different languages are blended together to form a single word, resulting in a word which is phonetically similar to the target word. Table 2 illustrates examples of intra-word code-mixed puns."
],
[
"Most puns we hear or use in everyday conversations are rarely recorded. One of the most common resources for finding recorded puns is advertisements, for example the highly creative and frequently released Amul advertisements in India BIBREF1 . Most of these are contextually integrated BIBREF0 with an image. While such puns may lose their humour out of context, it is still possible to recover their targets, so using these does not affect our task in any way.",
"To create a dataset to test our model on, we collected 518 advertisements released by Amul in the years 2014, 2015, 2017 and 2018, from their official web page. Of these, 333 were puns, including 121 code-mixed puns as defined in Section 3.1. We extracted the text of these 121 code-mixed puns and asked 3 people to disambiguate them, given just the advertisement text. All three annotators were university students in the 22-23 years age group, native Hindi speakers with bilingual fluency in English. The annotators were asked to identify the location of the pun in each of the advertisements and write down the target of the pun. Any disagreements between annotators were resolved by mutual discussion.",
"In a few cases where puns were identified to have multiple targets, we kept all such possibilities in our dataset. A few puns were identified to be non-recoverable because of the lack of contextual knowledge, while a few puns had multiple pun locations. We removed both these types from our dataset, which left us with 110 puns.",
"Finally, we divided these 110 annotated puns into the two categories as defined in Section 3.1 thereby getting 51 advertisements categorized as intra-sentential code-mixed puns, and the rest as intra-word code-mixed puns. We use the former as our test data."
],
[
"For preprocessing the text we give as input to our system, we first tokenize the advertisement text using NLTK's BIBREF13 tokenizer and remove all punctuations. We then give the resultant tokens as input to our model, which is a 4 step process as described below:",
"At this step, we aim to identify the language of each of the tokens in the input text by classifying them into one of the 5 categories: English, Hindi, Named Entity (NE), Out of Vocabulary (OOV), or Ambiguous (words that could belong to both English and Hindi).",
"We use a dictionary-based lookup method to classify a word as English or Hindi. Since the input is in Roman script, to recognize Hindi words, we use a list of 30k Hindi words in Roman transliteration, mapped to their Devanagari counterparts BIBREF14 . For the English language, we collected news data from the archives of a leading Indian newspaper, The Hindu. Data from 2012-2018 under the tags National, International, Sports, Cinema, Television was collected, amounting to 12,600 articles with 200k sentences and around 38k unique words. We use this data to build an English dictionary. We also used NLTK's BIBREF13 Named Entity Recognition module on the same data to get a dictionary of Named Entities.",
"We first try to classify all tokens as English, Hindi and NE using these dictionaries. Then, words which are found in both English and Hindi are marked as Ambiguous. The words which do not fall into any of these are classified as OOV.",
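The lookup order described above can be sketched as follows (the dictionary contents and label strings are illustrative assumptions):

```python
def classify_tokens(tokens, english_dict, hindi_dict, named_entities):
    """Label each token as EN, HI, NE, Ambiguous or OOV via dictionary lookup."""
    labels = []
    for token in tokens:
        word = token.lower()
        if token in named_entities:
            labels.append("NE")
        elif word in english_dict and word in hindi_dict:
            labels.append("Ambiguous")  # later considered once as EN and once as HI
        elif word in english_dict:
            labels.append("EN")
        elif word in hindi_dict:
            labels.append("HI")
        else:
            labels.append("OOV")
    return labels
```

For example, with toy dictionaries, a romanized Hindi token would be labelled HI, a token found in both dictionaries Ambiguous, and an unseen blend OOV.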
"We now identify all possible punning locations in the text. For this, we consider words on the boundaries of language change as candidates for pun locations. Then, all NEs and OOV words are added to the list of pun candidates as well. Third, if any Ambiguous words exist in the text, we consider it once as English and once as Hindi for the next steps.",
"In this step, we contextually look up all the candidate locations using the left context and the right context to get a list of all words that may occur at each position. We use bi-gram language models built with Kneser-Ney smoothing BIBREF15 : the English news data mentioned in the previous step and 100k sentences of Hindi monolingual data from BIBREF16 are used to build the language models for English and Hindi respectively. As it is highly likely that the left and the right context at a pun location belong to different languages, we look at each of them separately instead of taking an intersection of the left and the right context.",
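The context lookup can be illustrated with raw bigram tables; the actual models additionally apply Kneser-Ney smoothing and assign probabilities, which this simplified sketch omits:

```python
from collections import defaultdict


def build_bigram_tables(sentences):
    """Record which words follow (fw) and precede (bw) each word in a corpus."""
    fw, bw = defaultdict(set), defaultdict(set)
    for sent in sentences:
        for a, b in zip(sent, sent[1:]):
            fw[a].add(b)
            bw[b].add(a)
    return fw, bw


def candidates_at(fw, bw, left, right):
    # The left and right contexts are queried separately (they may be in
    # different languages), so we take the union, not the intersection.
    return fw.get(left, set()) | bw.get(right, set())
```

In practice each language's table would come from its own monolingual corpus, and the candidate set at a pun location is the union over both languages' lookups.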
"Lastly, at each pun location, we calculate the similarity of the word at that location with all the words that can occur at that location depending on the context and pick the most similar words as the possible targets.",
"To compare words belonging to two different languages on a phonetic basis, we convert both of them to WX notation BIBREF17 , which denotes a standard way to represent Indian languages in the Roman script. We transliterate our identified Hindi words from Devanagari to WX notation. To convert English words to the same notation, we use the CMU phonetic dictionary , which uses a 39 phoneme set to represent North American pronunciations of English words. We build a mapping between this phoneme set and WX notation. Whenever there was no exact parallel between CMU pronouncing dictionary's notation and WX, we used the word's Indian English pronunciation to find the closest match.",
"Once all words are converted to WX notation, we use a modified version of Levenshtein distance BIBREF18 to find the most similar words. In this normalized version of Levenshtein distance, we account for a few features such as aspiration (for example, /p/ vs. /ph/), which is non-phonemic in English, as well as vowel elongations, rhyme, and shared beginning or ending sounds.",
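The modified distance can be sketched as a standard dynamic-programming edit distance with a pluggable substitution cost; the specific cheap pairs below (aspiration, /w/ vs. /v/) are illustrative assumptions rather than the paper's exact weights:

```python
def edit_distance(a, b, subst_cost=None):
    """Levenshtein distance over token sequences with a customisable substitution cost."""
    if subst_cost is None:
        subst_cost = lambda x, y: 0 if x == y else 1
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + subst_cost(a[i - 1], b[j - 1]))
    return d[m][n]


# Illustrative normalisation: cheap substitutions for phonetically close tokens.
CLOSE_PAIRS = {("p", "ph"), ("ph", "p"), ("w", "v"), ("v", "w")}


def phonetic_cost(x, y):
    if x == y:
        return 0
    return 0.5 if (x, y) in CLOSE_PAIRS else 1
```

Candidate targets at a pun location would then be ranked by this distance, with the cheapest candidates returned as the most likely targets.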
"In case of an OOV word, since it cannot be converted to WX notation due to non-availability of any phonetic transcription, we simply find the words with the least orthographic distance when written in Roman script, using a similar measure as used for phonetic distance with a few more normalizations (for example, considering 'w' and 'v' as similar)."
],
[
"We test the model explained in the previous section on the test dataset described in Section 3.2 and note that it correctly recovers the targets for 34 of the 51 puns, i.e. around 67%, which is a very encouraging result for this complex task. Examples where the system performed successfully are given in Table 3 .",
"Below, we present a thorough error analysis of the cases in which our method fails."
],
[
"To conclude, in this paper we present a first-ever work on target recovery for code-mixed puns. We study various puns where the wordplay is a result of code-switching, and classify them into two categories: puns with intra-sentential code-mixing and those with intra-word code-mixing. We then propose a methodology to recover the targets of puns belonging to the former category, using only monolingual language data. We test our proposed approach on a small manually annotated dataset and see that our system successfully recovers the targets of 67% of the puns in the set.",
"In the future, we want to perform a more comprehensive evaluation of this approach on a larger, more diverse set of puns. We want to improve and extend our approach to be able to recover intra-word code-mixed puns along with the intra-sentential ones that it handles right now. After that, the system should be extended to be able to recover all kinds of puns in code-mixed language, regardless of whether the pun itself is monolingual or code-mixed."
],
[
"We thank the anonymous reviewers for their comments that helped improve this paper."
]
]
}
|
{
"question": [
"What are puns?",
"What are the categories of code-mixed puns?"
],
"question_id": [
"bcd6befa65cab3ffa6334c8ecedd065a4161028b",
"479fc9e6d6d80e69f425d9e82e618e6b7cd12764"
],
"nlp_background": [
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no"
],
"search_query": [
"",
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Puns are a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect BIBREF3 . Puns where the two meanings share the same pronunciation are known as homophonic or perfect puns, while those relying on similar but non-identical sounding words are known as heterophonic BIBREF4 or imperfect puns BIBREF5 . In this paper, we study automatic target recoverability of English-Hindi code mixed puns - which are more commonly imperfect puns, but may also be perfect puns in some cases."
],
"highlighted_evidence": [
"Puns are a form of wordplay jokes in which one sign (e.g. a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect BIBREF3 ."
]
}
],
"annotation_id": [
"eed1806ed0ea6052a8ea8a587cdfb94a67a97256"
],
"worker_id": [
"7fa8d8b1eb8a1630feb99a8e11ebfa501ac5bc3c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"intra-sentential and intra-word"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"With India being a diverse linguistic region, there is an ever increasing usage of code-mixed Hindi-English language (along with various others) because bilingualism and even multilingualism are quite common. Consequently, we have also seen an increase in the usage of code-mixed language in online forums, advertisements etc. Code-mixed humour, especially puns have become increasingly popular because being able to use the same punning techniques but with two languages in play has opened up numerous avenues for new and interesting wordplays. With the increasing popularity and acceptance for the usage of code-mixed language, it has become important that computers are also able to process it and even decipher complex phenomena like humour. Traditional Word Sense Disambiguation (WSD) based methods cannot be used in target recovery of code-mixed puns, because they are no longer about multiple senses of a single word but about two words from two different languages. Code-switching comes with no markers, and the punning word may not even be a word in either of the languages being used. Sometimes words from the two languages can be combined to form a word which only a bilingual speaker would understand. Hence, this task on such data calls for a different set of strategies altogether. We approach this problem in two parts. First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sentential and intra-word. Second, we develop a four stage pipeline to achieve our goal - Language Identification, Pun Candidate Identification, Context Lookup and Phonetic Distance Minimization. We then test our approach on a small dataset and note that our method is successfully able to recover targets for a majority of the puns."
],
"highlighted_evidence": [
" First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sentential and intra-word. "
]
}
],
"annotation_id": [
"0feacb5d838410ce6d4eaba2f4a93f35423da33a"
],
"worker_id": [
"35491e1e579f6d147f4793edce4c1a80ab2410e7"
]
}
]
}
|
{
"caption": [
"Table 2: Examples of intra-word code-mixed puns",
"Table 1: Examples of intra-sentential code-mixed puns",
"Figure 1: This figure illustrates, taking Pun1 as example, our model and the 4 major steps it comprises: 1. Language Identification, 2. Identification of Candidate Pun Locations, 3. Context Lookup and 4. Phonetic Distance minimization.",
"Table 3: Examples of puns successfully recovered by our system",
"Table 5: Example for error case 2, where the pun is based on the pronunciation of an abbreviation.",
"Table 6: Example for error case 3, where the target does not exist in the language model."
],
"file": [
"3-Table2-1.png",
"3-Table1-1.png",
"4-Figure1-1.png",
"4-Table3-1.png",
"5-Table5-1.png",
"5-Table6-1.png"
]
}
|
2003.05995
|
CRWIZ: A Framework for Crowdsourcing Real-Time Wizard-of-Oz Dialogues
|
Large corpora of task-based and open-domain conversational dialogues are hugely valuable in the field of data-driven dialogue systems. Crowdsourcing platforms, such as Amazon Mechanical Turk, have been an effective method for collecting such large amounts of data. However, difficulties arise when task-based dialogues require expert domain knowledge or rapid access to domain-relevant information, such as databases for tourism. This will become even more prevalent as dialogue systems become increasingly ambitious, expanding into tasks with high levels of complexity that require collaboration and forward planning, such as in our domain of emergency response. In this paper, we propose CRWIZ: a framework for collecting real-time Wizard of Oz dialogues through crowdsourcing for collaborative, complex tasks. This framework uses semi-guided dialogue to avoid interactions that breach procedures and processes only known to experts, while enabling the capture of a wide variety of interactions. The framework is available at https://github.com/JChiyah/crwiz
|
{
"section_name": [
"Introduction",
"Related Work",
"System Overview",
"Data Collection",
"Data Collection ::: Implementation",
"Data Collection ::: Deployment",
"Data Analysis",
"Data Analysis ::: Subjective Data",
"Data Analysis ::: Single vs Multiple Wizards",
"Data Analysis ::: Limitations",
"Data Analysis ::: Future Work",
"Conclusion",
"Acknowledgements"
],
"paragraphs": [
[
"Recent machine learning breakthroughs in dialogue systems and their respective components have been made possible by training on publicly available large scale datasets, such as ConvAI BIBREF0, bAbI BIBREF1 and MultiWoZ BIBREF2, many of which are collected on crowdsourcing services, such as Amazon Mechanical Turk and Figure-eight. These data collection methods have the benefits of being cost-effective, time-efficient to collect and scalable, enabling the collection of large numbers of dialogues.",
"Where this crowdsourcing method has its limitations is when specific domain expert knowledge is required, rather than general conversation. These tasks include, for example, call centre agents BIBREF3 or clerks with access to a database, as is required for tourism information and booking BIBREF2. In the near future, there will be a demand to extend this to workplace-specific tasks and procedures. Therefore, a method of gathering crowdsourced dialogue data is needed that ensures compliance with such procedures, whilst providing coverage of a wide variety of dialogue phenomena that could be observed in deployment of a trained dialogue system.",
"Wizard-of-Oz data collections in the past have provided such a mechanism. However, these have traditionally not been scalable because of the scarcity of Wizard experts or the expense to train up workers. This was the situation with an initial study reported in BIBREF4, which was conducted in a traditional lab setting and where the Wizard (an academic researcher) had to learn, through training and reading manuals, how best to perform operations in our domain of emergency response.",
"We present the CRWIZ Intelligent Wizard Interface that enables a crowdsourced Wizard to make intelligent, relevant choices without such intensive training by providing a restricted list of valid and relevant dialogue task actions, which changes dynamically based on the context, as the interaction evolves.",
"Prior crowdsourced wizarded data collections have divided the dialogue up into turns and each worker's job consists of one turn utterance generation given a static dialogue context, as in the MultiWoZ dataset BIBREF2. However, this can limit naturalness of the dialogues by restricting forward planning, collaboration and use of memory that humans use for complex multi-stage tasks in a shared dynamic environment/context.",
"Our scenario is such a complex task. Specifically, our scenario relates to using robotics and autonomous systems on an offshore energy platform to resolve an emergency and is part of the EPSRC ORCA Hub project BIBREF5. The ORCA Hub vision is to use teams of robots and autonomous intelligent systems to work on offshore energy platforms to enable cheaper, safer and more efficient working practices. An important part of this is ensuring safety of robots in complex, dynamic and cluttered environments, co-operating with remote operators. With this data collection method reported here, we aim to automate a conversational Intelligent Assistant (Fred), who acts as an intermediary between the operator and the multiple robotic systems BIBREF6, BIBREF7. Emergency response is clearly a high-stakes situation, which is difficult to emulate in a lab or crowdsourced data collection environment. Therefore, in order to foster engagement and collaboration, the scenario was gamified with a monetary reward given for task success.",
"In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows:",
"The release of a platform for the CRWIZ Intelligent Wizard Interface to allow for the collection of dialogue data for longer complex tasks, by providing a dynamic selection of relevant dialogue acts.",
"A survey of existing datasets and data collection platforms, with a comparison to the CRWIZ data collection for Wizarded crowdsourced data in task-based interactions."
],
[
"Table TABREF3 gives an overview of prior work and datasets. We report various factors to compare to the CRWIZ dataset, corresponding to columns in Table TABREF3: whether or not the person was aware they were talking to a bot; whether each dialogue had a single or multiple participants per role; whether the data collection was crowdsourced; and the modality of the interaction and the domain. As we see from the bottom row, none of the datasets reported in the table meet all the criteria we are aiming for, exemplifying the need for a novel approach.",
"Collecting large amounts of dialogue data can be very challenging, as two interlocutors are required to create a conversation. If one of the partners in the conversation is a machine, as in BIBREF0, the challenge becomes slightly easier, since only one human partner is needed. However, in most cases these datasets are aimed at creating resources to train the conversational system itself. Self-authoring the dialogues BIBREF16 or artificially creating data BIBREF1 could be a solution to rapidly collect data, but this has been shown to produce low-quality, unnatural data BIBREF17.",
"One way to mitigate the necessity of pairing two users simultaneously is to allow several participants to contribute to the dialogue, one turn at a time. This approach has been used both in task-oriented BIBREF10, BIBREF2, BIBREF9 and chitchat BIBREF17 settings. This means that the same dialogue can be authored by several participants. However, this raises issues in terms of coherence and forward-planning. These can be addressed by carefully designing the data collection to provide the maximum amount of information to the participants (e.g. providing the task, personality traits of the bot, goals, etc.), but this adds to cognitive load, time, cost and participant fatigue.",
"Pairing is a valid option, which has been used in a number of recent data collections in various domains, such as navigating in a city BIBREF13, playing a negotiation game BIBREF14, talking about a person BIBREF18, playing an image game BIBREF8 or having a chat about a particular image that is shown to both participants BIBREF21, BIBREF22. Pairing frameworks such as Slurk BIBREF23 exist. Besides its pairing management feature, Slurk is designed to allow researchers to modify it and rapidly implement their own data collections.",
"The scenarios for the above-mentioned data collections are mostly intuitive tasks that humans do quite regularly, unlike our use-case scenario of emergency response. Role playing is one option. For example, recent work has tried to create datasets for non-collaborative scenarios BIBREF24, BIBREF25, asking participants to take on a particular role during the data collection. This is particularly challenging when the recruitment is done via a crowdsourcing platform. In BIBREF25, the motivation for the workers to play the role is intrinsic to the scenario. In this data collection, one of the participants tries to persuade their partner to contribute a certain amount of money to a charity. As a result of their dialogue, the money that the persuadee committed to donate was actually donated to a charity organisation. However, for scenarios such as ours, role playing requires a certain expertise and it is questionable whether the desired behaviour would be achieved simply by letting two non-experts converse with free text.",
"Therefore, in recent data collections, there have been a number of attempts to control the data quality in order to produce a desired behaviour. For example, in BIBREF15, the data collection was done with a limited number of subjects who performed the task several days in a row, behaving both as the Wizard and the customer of a travel agency. The same idea was followed in BIBREF12, where a number of participants took part in the data collection over a period of 6 months, and in BIBREF3, BIBREF19, where a limited number of subjects were trained to be the Wizard. This quality control, however, naturally comes with the cost of recruiting and paying these subjects accordingly.",
"The solution we propose in this paper tries to minimise these costs by increasing the pool of Wizards to anyone wanting to collaborate in the data collection, by providing them the necessary guidance to generate the desired dialogue behaviour. This is a valuable solution for collecting dialogues in domains where specific expertise is required and the cost of training capable Wizards is high. We required fine-grained control over the Wizard interface so as to be able to generate more directed dialogues for specialised domains, such as emergency response for offshore facilities. By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This proposes several advantages:",
"A guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge.",
"Providing several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios."
],
[
"The CRWIZ Intelligent Wizard Interface resides on Slurk BIBREF23, an interaction server built for conducting dialogue experiments and data collections. Slurk handles the pairing of participants and provides a basic chat layout amongst other features. Refer to BIBREF23 for more information on the pairing of participants and the original chat layout. Our chat layout remains similar to Slurk with an important difference. In our scenario, we assign each new participant a role (Operator or Wizard) and, depending on this role, the participant sees different game instructions and chat layout schemes. These are illustrated in Figures FIGREF8 and FIGREF11, for the Operator and Wizard respectively. The main components are described in turn below: 1) The Intelligent Wizard Interface; 2) dialogue structure; and 3) system-changing actions.",
"Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. Figure FIGREF11 shows this interface with several actions available to be used in our data collection.",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.",
"The CRWIZ framework is domain-agnostic, but the data collected with it corresponds to the emergency response domain.",
"System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:",
"Verbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.",
"Non-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.",
"Submitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface.",
"The advantage of the CRWIZ framework is that it can easily be adapted to different domains and procedures by simply modifying the dialogue states loaded at initialisation. These files are in YAML format and have a simple structure that defines, for each state, its NLG templates (the FSM will pick one template at random if there is more than one) and the states that it can transition to. Note that some further modifications may be necessary if the scenario is a slot-filling dialogue requiring specific information at various stages.",
"Once the dialogue between the participants finishes, they receive a code in the chat, which can then be submitted to the crowdsourcing platform for payment. The CRWIZ framework generates a JSON file in its log folder with all the information regarding the dialogue, including messages sent, FSM transitions, world state at each action, etc. Automatic evaluation metrics and annotations are also appended, such as the number of turns per participant, the time taken, or whether one of the participants disconnected. Paying the crowdworkers can then be done by simply checking that there is a dialogue file containing the token that they entered.",
],
[
"We set up a crowdsourced data collection through Amazon Mechanical Turk, in which two participants chatted with each other in a setting involving an emergency at an offshore facility. As mentioned above, participants had different roles during the interaction: one of them was an Operator of the offshore facility whereas the other one acted as an Intelligent Emergency Assistant. Both of them had the same goal of resolving the emergency and avoiding evacuation at all costs, but they had different functions in the task:",
"The Operator was responsible for the facility and had to give instructions to the Emergency Assistant to perform certain actions, such as deploying emergency robots. Participants in the role of Operator were able to chat freely with no restrictions and were additionally given a map of the facility and a list of available robots (see Figure FIGREF8).",
"The Emergency Assistant had to help the Operator handle the emergency by providing guidance and executing actions. Participants in the role of Emergency Assistant had predefined messages depending on the task progress. They had to choose between one of the options available, depending on which made sense at the time, but they also had the option to write their own message if necessary. The Emergency Assistant role mimics that of the Wizard in a Wizard-of-Oz experiment (see Figure FIGREF11).",
"The participants had a limited time of 6 minutes to resolve the emergency, which consisted of the following sub-tasks: 1) identify and locate the emergency; 2) resolve the emergency; and 3) assess the damage caused. They had four robots available to use with different capabilities: two ground robots with wheels (Husky) and two Quadcopter UAVs (Unmanned Aerial Vehicles). For images of these robots, see Figure FIGREF8. Some robots could inspect areas whereas others were capable of activating hoses, sprinklers or opening valves. Both participants, regardless of their role, had a list with the robots available and their capabilities, but only the Emergency Assistant could control them. This control was through high-level actions (e.g. moving a robot to an area, or ordering the robot to inspect it) that the Emergency Assistant had available as buttons in their interface, as shown in Figure FIGREF11. Mirroring safety constraints that would apply in the real world, only one robot could be performing an action at any time. The combinations of robots and capabilities meant that there was no single robot that could do all three steps of the task mentioned earlier (inspect, resolve and assess damage), but the robots could be used in any order, allowing for a variety of ways to resolve the emergency.",
"Participants would progress through the task when certain events were triggered by the Emergency Assistant. For instance, inspecting the area affected by an alarm would trigger the detection of the emergency. After locating the emergency, other dialogue options and commands would open up for the Emergency Assistant. In order to give importance to the milestones in the dialogue, these events were also signalled by GIFs (short animated video snippets) in the chat that both participants could see (e.g. a robot finding a fire), as in Figure FIGREF12. The GIFs were added for several reasons: to increase participant engagement and situation awareness, to aid in the game and to show progress visually. Note that there were no visual stimuli in the original WoZ study BIBREF4, but they were deemed necessary here to help the remote participants contextualise the scenario. These GIFs were produced using a Digital Twin simulation of the offshore facility with the various types of robots. See BIBREF26 for details on the Digital Twin.",
],
[
"The dialogue structure for the Emergency Assistant (the Wizard) followed a dialogue flow previously used for the original lab-based Wizard-of-Oz study BIBREF4 but which was slightly modified and simplified for this crowdsourced data collection. In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.",
"The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations. The Emergency Assistant dialogue options show various speaking styles, with a more assertive tone (“I am sending Husky 1 to east tower”) or others with more collaborative connotations (“Which robot do you want to send?” or “Husky 1 is available to send to east tower”). Refer to BIBREF4 for more details. Furthermore, neither participant was restricted in the number of messages that they could send and we did not require a balanced number of turns between them. However, there were several dialogue transitions that required an answer or authorisation from the Operator, so the FSM would lock the dialogue state until the condition was met. As mentioned earlier, the commands to control the robots are also transitions of the FSM, so they were not always available.",
"The Emergency Assistant interface contains a button to request a hint if they get stuck at any point in the conversation. This hint mechanism, when activated, highlights one of the possible dialogue options or robot buttons. The highlighted transition was based on the observed probability distribution of transitions from BIBREF4, in order to encourage a more collaborative interaction than a single direct answer.",
"As in the real world, robot actions during the task were simulated to take a certain period of time, depending on the robot executing it and the action. The Emergency Assistant had the option to give status updates and progress reports during this period, and several dialogue options were available to them whilst waiting. The time that robots would take to perform actions was based on simulations run on a Digital Twin of the offshore facility implemented in Gazebo BIBREF26. Specifically, we pre-simulated typical robot actions, with the robot's progress and position reflected in the Wizard interface through up-to-date dialogue options for the Emergency Assistant. Once the robot signalled the end of its action, additional updated dialogue options and actions became available to the Emergency Assistant. This simulation allowed us to collect dialogues with a realistic embedded world state.",
],
[
"We used Amazon Mechanical Turk (AMT) for the data collection. We framed the task as a game to encourage engagement and interaction. The whole task (a Human Intelligence Task, or HIT, in AMT) consisted of the following:",
"Reading an initial brief set of instructions for the overall task.",
"Waiting for a partner for a few seconds before being able to start the dialogue.",
"When a partner was found, they were shown the instructions for their assigned role. As these were different, we ensured that they both took around the same time. The instructions had both a text component and a video explaining how to play, select dialogues, robots, etc.",
"Playing the game to resolve the emergency. This part was limited to 6 minutes.",
"Filling in a post-task questionnaire about partner collaboration and task ease.",
"The participants received a game token after finishing the game that would allow them to complete the questionnaire and submit the task. This token helped us link their dialogue to the responses from the questionnaire.",
"Several initial pilots helped to define the total time required as 10 minutes for all the steps above. We set the HIT in AMT to last 20 minutes to allow additional time should any issues arise. The pilots also helped in setting the payment for the workers. Initially, participants were paid a flat amount of $1.4 per dialogue. However, we found that offering a tiered payment tied to the length of the dialogue, plus a bonus for completing the task, was the most successful and cost-effective method to foster engagement and conversation:",
"$0.5 as base for attempting the HIT, reading the instructions and completing the questionnaire.",
"$0.15 per minute during the game, for a maximum of $0.9 for the 6 minutes.",
"$0.2 additional bonus if the participants were able to successfully avoid the evacuation of the offshore facility.",
"The pay per worker was therefore $1.4 for completing a whole dialogue and $1.6 for those who resolved the emergency for a 10-minute HIT. This pay is above the Federal minimum wage in the US ($7.25/hr or $0.12/min) at the time of the experiment.",
"The post-task questionnaire had four questions rated on 7-point rating scales that are loosely based on the PARADISE BIBREF27 questions for spoken dialogue systems:",
"Partner collaboration: “How helpful was your partner?” on a scale of 1 (not helpful at all) to 7 (very helpful).",
"Information ease: “In this conversation, was it easy to get the information that I needed?” on a scale of 1 (no, not at all) to 7 (yes, completely).",
"Task ease: “How easy was the task?” on a scale of 1 (very easy) to 7 (very difficult).",
"User expertise: “In this conversation, did you know what you could say or do at each point of the dialog?” on a scale of 1 (no, not at all) to 7 (yes, completely).",
"At the end, there was also an optional entry to give free text feedback about the task and/or their partner."
],
[
"For the initial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). All the dialogues were manually checked by one of the authors, and those where the workers were clearly not partaking in the task or collaborating were removed from the dataset. The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds, with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4."
],
[
"Table TABREF33 gives the results from the post-task survey. We observe that subjective and objective task success are aligned, in that the dialogues that resolved the emergency were rated consistently higher than the rest.",
"Mann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.",
"Regarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“."
],
[
"In Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of Emergency Assistant turns (and consequently the total number of turns). To further understand these differences, we first grouped the dialogue acts into four broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is evident that in the lab setting, where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in contexts where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. In the crowdsourced data collection, on the other hand, situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.",
"Perhaps not surprisingly, the data shows a moderately strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.",
"The task success rate was also very different between the two set-ups. In the experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire, whereas in the crowdsourced setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower, moving at realistic speeds, unlike in the lab setting. A higher bonus and more time for the task might lead to a higher task success rate."
],
[
"It is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use."
],
[
"In future work, we want to expand and improve the platform. Dialogue system development can greatly benefit from better ways of obtaining data for rich task-oriented domains such as ours. Part of fully exploiting the potential of crowdsourcing services lies in having readily available tools that help in the generation and gathering of data. One such tool would be a method to take a set of rules, procedures or business processes and automatically convert to a FSM, in a similar way to BIBREF28, ready to be uploaded to the Wizard interface.",
"Regarding quality and coherence, dialogues are particularly challenging to automatically rate. In our data collection, there was not a correct or wrong dialogue option for the messages that the Emergency Assistant sent during the conversation, but some were better than others depending on the context with the Operator. This context is not easily measurable for complex tasks that depend on a dynamic world state. Therefore, we leave to future work automatically measuring dialogue quality through the use of context.",
"The introduction of Instructional Manipulation Checks BIBREF29 before the game to filter out inattentive participants could improve the quality of the data (crowdworkers are known to perform multiple tasks at once). Goodman2013 also recommend including screening questions that check both attention and language comprehension for AMT participants. Here, there is a balance that needs to be investigated between the experience and quality of crowdworkers and the need for large numbers of participants in order to be quickly paired.",
"We are currently exploring using the data collected to train dialogue models for the emergency response domain using Hybrid Code Networks BIBREF30."
],
[
"In conclusion, this paper described a new, freely available tool to collect crowdsourced dialogues in rich task-oriented settings. By exploiting the advantages of both the Wizard-of-Oz technique and crowdsourcing services, we can effortlessly obtain dialogues for complex scenarios. The predefined dialogue options available to the Wizard intuitively guide the conversation and allow the domain to be deeply explored without the need for expert training. These predefined options also reinforce the feeling of a true Wizard-of-Oz experiment, where the participant who is not the Wizard thinks that they are interacting with a non-human agent.",
"As the applications for task-based dialogue systems keep growing, we will see the need for systematic ways of generating dialogue corpora in varied, richer scenarios. This platform aims to be the first step towards the simplification of crowdsourcing data collections for task-oriented collaborative dialogues where the participants are working towards a shared common goal. The code for the platform and the data are also released with this publication."
],
[
"This work was supported by the EPSRC funded ORCA Hub (EP/R026173/1, 2017-2021). Chiyah Garcia's PhD is funded under the EPSRC iCase EP/T517471/1 with Siemens."
]
]
}
|
{
"question": [
"How is dialogue guided to avoid interactions that breach procedures and processes only known to experts?",
"What is meant by semiguided dialogue, what part of dialogue is guided?",
"Is CRWIZ already used for data collection, what are the results?",
"How does framework made sure that dialogue will not breach procedures?"
],
"question_id": [
"bc26eee4ef1c8eff2ab8114a319901695d044edb",
"9c94ff8c99d3e51c256f2db78c34b2361f26b9c2",
"8e9de181fa7d96df9686d0eb2a5c43841e6400fa",
"ff1595a388769c6429423a75b6e1734ef88d3e46"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows:",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions."
],
"highlighted_evidence": [
"In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions."
]
}
],
"annotation_id": [
"67953a768253175e8b82edaf51cba6604a936010"
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The solution we propose in this paper tries to minimise these costs by increasing the pool of Wizards to anyone wanting to collaborate in the data collection, by providing them the necessary guidance to generate the desired dialogue behaviour. This is a valuable solution for collecting dialogues in domains where specific expertise is required and the cost of training capable Wizards is high. We required fine-grained control over the Wizard interface so as to be able to generate more directed dialogues for specialised domains, such as emergency response for offshore facilities. By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This proposes several advantages:",
"A guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge.",
"Providing several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios.",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.",
"The dialogue structure for the Emergency Assistant (the Wizard) followed a dialogue flow previously used for the original lab-based Wizard-of-Oz study BIBREF4 but which was slightly modified and simplified for this crowdsourced data collection. In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.",
"The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations. The Emergency Assistant dialogue options show various speaking styles, with a more assertive tone (“I am sending Husky 1 to east tower”) or others with more collaborative connotations (“Which robot do you want to send?” or “Husky 1 is available to send to east tower”). Refer to BIBREF4 for more details. Furthermore, neither participants were restricted in the number of messages that they could send and we did not require a balanced number of turns between them. However, there were several dialogue transitions that required an answer or authorisation from the Operator, so the FSM would lock the dialogue state until the condition was met. As mentioned earlier, the commands to control the robots are also transitions of the FSM, so they were not always available."
],
"highlighted_evidence": [
"By providing the Wizard with several dialogue options (aside from free text), we guided the conversation and could introduce actions that change an internal system state. This proposes several advantages:\n\nA guided dialogue allows for set procedures to be learned and reduces the amount of data needed for a machine learning model for dialogue management to converge.\n\nProviding several dialogue options to the Wizard increases the pace of the interaction and allows them to understand and navigate more complex scenarios.",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.",
"In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.",
"The dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations."
]
}
],
"annotation_id": [
"f0e709e5450f68728ceb216c496d69a43f916281"
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Yes, CRWIZ has been used for data collection and its initial use resulted in 145 dialogues. The average time taken for the task was close to the estimate of 10 minutes, 14 dialogues (9.66%) resolved the emergency in the scenario, and these dialogues rated consistently higher in subjective and objective ratings than those which did not resolve the emergency. Qualitative results showed that participants believed that they were interacting with an automated assistant.",
"evidence": [
"For the intitial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). All the dialogues were manually checked by one of the authors and those where the workers were clearly not partaking in the task or collaborating were removed from the dataset. The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4.",
"Data Analysis ::: Subjective Data",
"Table TABREF33 gives the results from the post-task survey. We observe, that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.",
"Mann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.",
"Regarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“.",
"Data Analysis ::: Single vs Multiple Wizards",
"In Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we have first grouped the dialogue acts in four different broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in context where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection utterances, the situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.",
"Perhaps not surprisingly, the data shows a medium strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.",
"The task success rate was also very different between the two set-ups. In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate.",
"Data Analysis ::: Limitations",
"It is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use."
],
"highlighted_evidence": [
"For the intitial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). ",
"The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4.\n\nData Analysis ::: Subjective Data\nTable TABREF33 gives the results from the post-task survey. We observe, that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.\n\nMann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.\n\nRegarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“.\n\nData Analysis ::: Single vs Multiple Wizards\nIn Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. 
Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we have first grouped the dialogue acts in four different broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in context where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection utterances, the situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.\n\nPerhaps not surprisingly, the data shows a medium strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.\n\nThe task success rate was also very different between the two set-ups. 
In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate.\n\nData Analysis ::: Limitations\nIt is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use."
]
}
],
"annotation_id": [
"37067c20bb2afc29e9dbc7ddf9e82c1fb7f7f4ad"
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. Figure FIGREF11 shows this interface with several actions available to be used in our data collection.",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.",
"System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:",
"Verbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.",
"Non-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.",
"Submitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface."
],
"highlighted_evidence": [
"Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. ",
"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.",
"System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:\n\nVerbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.\n\nNon-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.\n\nSubmitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. "
]
}
],
"annotation_id": [
"41e378720c8fbac9cf7c973a8dca6c412c11d07a"
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
}
]
}
|
{
"caption": [
"Table 1: Comparison of relevant recent works. In order, the columns refer to: the dataset and reference; if the dataset was generated using Wizard-of-Oz techniques; if there was a unique participant per role for the whole dialogue; if the dataset was crowdsourced; the type of interaction modality used; and finally, the type of task or domain that the dataset covers. † The participants were aware that the dialogue was authored by humans. ‡ The participants were volunteers without getting paid.",
"Figure 1: Interface shown to those in the Operator role running on the Slurk interaction server. It has a similar layout to other chat applications with the chat window on the left and a field to send messages at the bottom. The right side is used to display additional information.",
"Figure 2: Interface shown to those in the Emergency Assistant Wizard role running on the Slurk interaction server. The chat window is on the left, with the dialogue options and buttons to control the robots on the right. The chat here shows GIFs that appear to increase engagement and show game progress visually.",
"Figure 3: Some of the GIFs shown during the game. A and B are Husky robots assessing damages and inspecting a fire respectively. C and D show Quadcopter UAVs moving and inspecting an area.",
"Figure 4: Frequency of the top-10 Emergency Assistant dialogue acts in the data collected. There were 40 unique dialogue acts, each with two or more distinct formulations on average. Most of them also had slots to fill with contextual information, such as the name of the robot. Dialogue acts are colour-coded based on 3 main types.",
"Figure 5: Frequency of the top-10 Emergency Assistant dialogue acts in (Lopes et al., 2019).",
"Table 2: Interaction features of the dialogues collected. We compare it with the results of the Wizard-of-Oz experiment in a controlled setting from (Lopes et al., 2019).",
"Table 3: Distribution of the types of dialogue acts in the data collected with CRWIZ, compared with (Lopes et al., 2019).",
"Table 4: Subjective ratings for the post-task survey reporting Mean, Median, Mode and Standard Deviation (SD). Scales were on a 7-point rating scale. “Dialogues Collected” refers to all the dialogues collected after filtering, whereas the other columns are for the dialogues that did not resolved the emergency (“Emergency Not Resolved Dialogues”) and those that did (“Emergency Resolved Dialogues”). Higher is better (Q3 reversed for this table). Highest numbers are bold. * indicates significant differences (p < 0.05, Mann-Whitney-U) between Emergency Resolved and Emergency Not Resolved dialogues.",
"Table 5: Interaction between participants from one of the dialogues collected."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"5-Figure2-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure5-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png"
]
}
|
1710.07395
|
Detecting Online Hate Speech Using Context Aware Models
|
In the wake of a polarizing election, the cyber world is laden with hate speech. Context accompanying a hate speech text is useful for identifying hate speech, but it has been largely overlooked in existing datasets and hate speech detection models. In this paper, we provide an annotated corpus of hate speech with context information well kept. We then propose two types of hate speech detection models that incorporate context information, a logistic regression model with context features and a neural network model with learning components for context. Our evaluation shows that both models outperform a strong baseline by around 3% to 4% in F1 score, and combining these two models further improves performance by another 7% in F1 score.
|
{
"section_name": [
"Introduction",
"Related Works",
"Corpus Overview",
"Annotation Guidelines",
"Annotation Procedure",
"Characteristics in Fox News User Comments corpus",
"Logistic Regression Models",
"Neural Network Models",
"Ensemble Models",
"Evaluation",
"Experimental Results",
"Conclusion"
],
"paragraphs": [
[
"Following a turbulent election season, 2016's cyber world is awash with hate speech. Automatic detection of hate speech has become an urgent need since human supervision is unable to deal with large quantities of emerging texts.",
"Context information, by our definition, is the text, symbols or any other kind of information related to the original text. While context accompanying hate speech is intuitively useful for detecting it, context information has been overlooked in existing datasets and automatic detection models.",
"Online hate speech tends to be subtle and creative, which makes context especially important for automatic hate speech detection. For instance,",
"",
"(1) barryswallows: Merkel would never say NO",
"",
"This comment was posted in response to the news article titled \"German lawmakers approve 'no means no' rape law after Cologne assaults\". With context, it becomes clear that this comment is a vicious insult towards a female politician. However, almost all publicly available annotated hate speech datasets contain no context information BIBREF0, BIBREF1, BIBREF2, BIBREF3.",
"We have created a new dataset consisting of 1528 Fox News user comments, which were taken from 10 complete discussion threads for 10 widely read Fox News articles. It differs from previous datasets in two respects. First, it preserves rich context information for each comment, including the user's screen name, all comments in the same thread and the news article the comment was written for. Second, there is no biased data selection: all comments in each news comment thread were annotated.",
"In this paper, we explored two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information in automatic hate speech detection. First, logistic regression models have been used in several prior hate speech detection studies BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF0, BIBREF2, BIBREF9, and various features have been tried, including character-level and word-level n-gram features, syntactic features, linguistic features, and comment embedding features. However, all these features were derived from the to-be-classified text itself. In contrast, we experiment with logistic regression models that also use features extracted from context text. Second, neural network models BIBREF10, BIBREF11, BIBREF12 have the potential to capture compositional meanings of text, but they had not been well explored for online hate speech detection until recently BIBREF13. We experiment with neural net models containing separate learning components that model compositional meanings of context information. Furthermore, recognizing the unique strengths of each type of model, we build ensemble models of the two. Evaluation shows that context-aware logistic regression models and neural net models outperform their counterparts that are blind to context information. In particular, the final ensemble models outperform a strong baseline system by around 10% in F1 score.",
],
[
"Recently, a few datasets with human-labeled hate speech have been created; however, most existing datasets do not contain context information. Due to the sparsity of hate speech in everyday posts, researchers tend to sample candidates via bootstrapping instead of random sampling, in order to increase the chance of seeing hate speech. Therefore, the collected data instances are likely to come from distinct contexts.",
"For instance, in the Primary Data Set described in BIBREF14 and later used by BIBREF9, 10% of the dataset is randomly selected while the remainder consists of comments tagged by users and editors. BIBREF15 built a balanced data set of 24.5k tweets by selecting from Twitter accounts that claimed to be racist or were deemed racist based on their followed news sources. BIBREF5 collected hateful tweets related to the murder of Drummer Lee Rigby in 2013. BIBREF0 provided a corpus of 16k annotated tweets in which 3.3k are labeled as sexist and 1.9k are labeled as racist. They created this corpus by bootstrapping from certain keywords, specific hashtags and certain prolific users. BIBREF16 created a dataset of 9000 human-labeled paragraphs that were collected using regular expression matching in order to find hate speech targeting Judaism and Israel. BIBREF7 extracted data instances from Instagram that were associated with certain user accounts. BIBREF2 presented a very large corpus containing over 115k Wikipedia comments, of which around 37k are randomly sampled comments and the remaining 78k were selected from Wikipedia blocked comments.",
"Most existing hate speech detection models are feature-based and use features derived from the target text itself. BIBREF5 experimented with different classification methods including Bayesian Logistic Regression, Random Forest Decision Trees and SVMs, using features such as n-grams, reduced n-grams, dependency paths, and hateful terms. BIBREF0 proposed a logistic regression model using character n-gram features. BIBREF14 used paragraph2vec for joint modeling of comments and words, and the generated embeddings were then used as features in a logistic regression model. BIBREF9 experimented with various syntactic, linguistic and distributional semantic features including word length, sentence length, part-of-speech tags, and embedding features, in order to improve the performance of logistic regression classifiers. Recently, BIBREF17 surveyed current approaches for hate speech detection, which interestingly also called attention to modeling context information for resolving difficult hate speech instances.",
],
[
"The Fox News User Comments corpus consists of 1528 annotated comments (435 labeled as hateful) that were posted by 678 different users in 10 complete news discussion threads on the Fox News website. The 10 threads were manually selected and represent popular discussion threads during August 2016. All of the comments included in these 10 threads were annotated. The number of comments in each of the 10 threads is roughly equal. Rich context information was kept for each comment, including its user screen name, all comments in the thread with their nested structure, and the original news article. The data corpus along with annotation guidelines is posted on GitHub."
],
[
"Our annotation guidelines are similar to the guidelines used by BIBREF9 . We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful."
],
[
"We identified two native English speakers for annotating online user comments. The two annotators first discussed and practiced before they started annotation. They achieved a surprisingly high Kappa score BIBREF18 of 0.98 on 648 comments from 4 threads. We think that thorough discussion in the training stage is the key to achieving this high inter-annotator agreement. Comments on which the annotators disagreed were labeled as hateful as long as one annotator labeled them as hateful. One annotator then continued to annotate the remaining 880 comments from the remaining six discussion threads."
],
[
"Hateful comments in the Fox News User Comments Corpus are often subtle, creative and implicit. Therefore, context information is necessary in order to accurately identify such hate speech.",
"The hatefulness of many comments depends on understanding their context. For instance,",
"",
"(3) mastersundholm: Just remember no trabjo no cervesa",
"",
"This comment was posted in response to the news \"States moving to restore work requirements for food stamp recipients\". It implies that Latino immigrants abuse the food stamp policy, which is clearly stereotyping.",
"Many hateful comments use implicit and subtle language, containing no clear hate-indicating word or phrase. To recognize such hard cases, we hypothesize that neural net models are more suitable because they capture the overall composite meaning of a comment. For instance, the following comment is a typical implicit stereotype against women.",
"",
"(4) MarineAssassin: Hey Brianne - get in the kitchen and make me a samich. Chop Chop",
"",
"11% of our annotated comments have more than 50 words each. In such long comments, the hateful indicators usually appear in a small region of a comment while the majority of the comment is neutral. For example,",
"",
"(5) TMmckay: I thought ...115 words... Too many blacks winning, must be racist and needs affirmative action to make whites equally win! ",
"",
"Certain user screen names indicate hatefulness, implying that comments posted by these users are likely to contain hate speech. In the following example, commie is a slur for communists.",
"",
"(6)nocommie11: Blah blah blah. Israel is the only civilized nation in the region to keep the unwashed masses at bay.",
""
],
[
"In the logistic regression models, we extract four types of features: word-level and character-level n-gram features as well as two types of lexicon-derived features. We first extract these four types of features from the target comment. Then we extract the same features from two sources of context text, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment.",
"For the logistic regression model implementation, we use l2 regularization. We adopt the balanced class weight option as described in scikit-learn. The logistic regression model with character-level n-gram features is presented as a strong baseline for comparison, since it has been shown to be very effective BIBREF0, BIBREF9.",
"",
"",
"For character level n-grams, we extract character level bigrams, tri-grams and four-grams. For word level n-grams, we extract unigrams and bigrams.",
"Linguistic Inquiry and Word Count, also called LIWC, has been proven useful for text analysis and classification BIBREF19 . In the LIWC dictionary, each word is labeled with several semantic labels. In our experiment, we use the LIWC 2015 dictionary, which contains 125 semantic categories. Each word is converted into a 125-dimension LIWC vector, one dimension per semantic category. The LIWC feature vector for a comment or its context is a 125-dimension vector as well, which is the sum of all its words' LIWC vectors.",
"The NRC emotion lexicon contains a list of English words labeled with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and sentiment polarities (negative and positive) BIBREF20 . We use the NRC emotion lexicon to capture emotion clues in text. Each word is converted into a 10-dimension emotion vector, corresponding to eight emotion types and two polarity labels. The emotion vector for a comment or its context is a 10-dimension vector as well, which is the sum of all its words' emotion vectors.",
"As shown in table TABREF20 , given the comment as the only input, the combination of character n-grams, word n-grams, LIWC features and NRC features achieves the best performance. This shows that adding features beyond character-level ones can improve hate speech detection performance. However, the improvement is limited: compared with the baseline model, the F1 score improves by only 1.3%.",
"In contrast, when context information was taken into account, the performance greatly improved. Specifically, after incorporating features extracted from the news title and username, the model performance was improved by around 4% in both F1 score and AUC score. This shows that using additional context based features in logistic regression models is useful for hate speech detection."
],
[
"Our neural network model mainly consists of three parallel LSTM BIBREF21 layers. It has three different inputs, including the target comment, its news title and its username. Comment and news title are encoded into a sequence of word embeddings. We use pre-trained word embeddings in word2vec. Username is encoded into a sequence of characters. We use one-hot encoding of characters.",
"The comment is sent into a bi-directional LSTM with an attention mechanism BIBREF22 . The news title and username are sent into bi-directional LSTMs. Note that we did not apply an attention mechanism to the neural network models for username and news title because both types of context are relatively short, and attention mechanisms tend to be useful when the text input is long. The three LSTM output layers are concatenated, then connected to a sigmoid layer, which outputs predictions.",
"The number of hidden units in each LSTM used in our model is set to be 100. The recurrent dropout rate of LSTMs is set to 0.2. In addition, we use binary cross entropy as the loss function and a batch size of 128. The neural network models are trained for 30 epochs.",
"As shown in table TABREF21 , given comment as the only input content, the bi-directional LSTM model with attention mechanism achieves the best performance. Note that the attention mechanism significantly improves the hate speech detection performance of the bi-directional LSTM model. We hypothesize that this is because hate indicator phrases are often concentrated in a small region of a comment, which is especially the case for long comments."
],
[
"To study the differences between the logistic regression model and the neural network model, and to potentially improve performance, we build and evaluate ensemble models.",
"As shown in table TABREF24 , both ensemble models significantly improved hate speech detection performance. Figure FIGREF28 shows the system prediction results of comments that were labeled as hateful in the dataset. It can be seen that the two models perform differently. We further examined the predicted comments and found that both types of models have unique strengths in identifying certain types of hateful comments.",
"The feature-based logistic regression models are capable of making good use of character-level n-gram features, which are powerful in identifying hateful comments that contain OOV words, capitalized words or misspelled words. We provide two examples from the hateful comments that were labeled only by the logistic regression model:",
"",
"(7)kmawhmf:FBLM.",
"",
"Here FBLM means fuck Black Lives Matter. This hateful comment contains only character-level information, which is exactly what our logistic regression model can exploit.",
"",
"(8)SFgunrmn: what a efen loon, but most femanazis are.",
"",
"This comment deliberately misspells feminazi as femanazis, a derogatory term for feminists. It shows that the logistic regression model is capable of dealing with misspellings.",
"The LSTM with an attention mechanism is suitable for identifying the specific small regions indicating hatefulness in long comments. In addition, the neural net models are powerful in capturing implicit hateful language. The following are two hateful comment examples that were identified only by the neural net model:",
"",
"(9)freedomscout: @LarJass Many religions are poisonous to logic and truth, that much is true...and human beings still remain fallen human beings even they are Redeemed by the Sacrifice of Jesus Christ. So there's that. But the fallacies of thinking cannot be limited or attributed to religion but to error inherent in human motivation, the motivation to utter self-centeredness as fallen sinful human beings. Nearly all of the world's many religions are expressions of that utter sinful nature...Christianity and Judaism being the sole exceptions.",
"",
"This comment is expressing the stereotyping against religions which are not Christian or Judaism. The hatefulness is concentrated within the two bolded segments.",
"",
"(10)mamahattheridge: blacks Love being victims.",
"In this comment, the four words themselves are not hateful at all. But when combined, they are clearly hateful toward black people."
],
[
"We evaluate our models by 10-fold cross validation using our newly created Fox News User Comments Corpus. Both types of models use the exact same 10 folds of training data and test data. We report experimental results using multiple metrics, including accuracy, precision/recall/F1-score, and area under the curve (AUC)."
],
[
"Table TABREF20 shows the performance of logistic regression models. The first section of table TABREF20 shows the performance of logistic regression models using features extracted from a target comment only. The result shows that the logistic regression model was improved in every metric after adding both word-level n-gram features and lexicon derived features. However, the improvements are moderate.",
"The second section shows the performance of logistic regression models using the four types of features extracted from both a target comment and its contexts. The result shows that the logistic regression model using features extracted from a comment and both types of context achieved the best performance, obtaining improvements of 2.8% and 2.5% in AUC score and F1-score, respectively.",
"Table TABREF21 shows the performance of neural network models. The first section of table TABREF21 shows the performance of several neural network models that use comments as the only input. The model names are self-explanatory. We can see that the attention mechanism coupled with the bi-directional LSTM neural net greatly improved online hate speech detection, by 5.7% in AUC score.",
"The second section of table TABREF21 shows performance of the best neural net model (bi-directional LSTM with attention) after adding additional learning components that take context as input. The results show that adding username and news title can both improve model performance. Using news title gives the best F1 score while using both news title and username gives the best AUC score.",
"Table TABREF24 shows performance of ensemble models by combining prediction results of the best context-aware logistic regression model and the best context-aware neural network model. We used two strategies in combining prediction results of two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of two scores assigned by the two separate models; instead, the Average Score Ensemble model used the average score to make final decisions.",
"We can see that both ensemble models further improved hate speech detection performance compared with using one model only, achieving the best classification performance. Compared with the logistic regression baseline, the Max Score Ensemble model improved recall by more than 20% with comparable precision and improved the F1 score by around 10%. In addition, the Average Score Ensemble model improved the AUC score by around 7%."
],
[
"We demonstrated the importance of utilizing context information for online hate speech detection. We first presented a corpus of hateful speech consisting of full threads of online discussion posts. In addition, we presented two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information for improving hate speech detection performance. Furthermore, we show that ensemble models leveraging strengths of both types of models achieve the best performance for automatic online hate speech detection."
]
]
}
|
{
"question": [
"How do they combine the models?",
"What is their baseline?",
"What context do they use?",
"What is their definition of hate speech?",
"What architecture has the neural network?"
],
"question_id": [
"dd2046f5481f11b7639a230e8ca92904da75feed",
"47e6c3e6fcc9be8ca2437f41a4fef58ef4c02579",
"569ad21441e99ae782d325d5f5e1ac19e08d5e76",
"90741b227b25c42e0b81a08c279b94598a25119d",
"1d739bb8e5d887fdfd1f4b6e39c57695c042fa25"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"research",
"research",
"research",
"research",
"research"
],
"paper_read": [
"yes",
"yes",
"yes",
"yes",
"yes"
],
"search_query": [
"",
"",
"",
"",
""
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"maximum of two scores assigned by the two separate models",
"average score"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table TABREF24 shows performance of ensemble models by combining prediction results of the best context-aware logistic regression model and the best context-aware neural network model. We used two strategies in combining prediction results of two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of two scores assigned by the two separate models; instead, the Average Score Ensemble model used the average score to make final decisions."
],
"highlighted_evidence": [
"We used two strategies in combining prediction results of two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of two scores assigned by the two separate models; instead, the Average Score Ensemble model used the average score to make final decisions."
]
}
],
"annotation_id": [
"7ef612cdd857005a8a83a67e33106def49ae2ae6"
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Logistic regression model with character-level n-gram features"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"For logistic regression model implementation, we use l2 loss. We adopt the balanced class weight as described in Scikit learn. Logistic regression model with character-level n-gram features is presented as a strong baseline for comparison since it was shown very effective. BIBREF0 , BIBREF9"
],
"highlighted_evidence": [
" Logistic regression model with character-level n-gram features is presented as a strong baseline for comparison since it was shown very effective."
]
}
],
"annotation_id": [
"91ccfe8a7d811a711c173f065e106c757a88a3e5"
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"title of the news article",
"screen name of the user"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In logistic regression models, we extract four types of features, word-level and character-level n-gram features as well as two types of lexicon derived features. We extract these four types of features from the target comment first. Then we extract these features from two sources of context texts, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment."
],
"highlighted_evidence": [
"Then we extract these features from two sources of context texts, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment."
]
}
],
"annotation_id": [
"10171a4ff0ace9c172eaff1684142da661bcda82"
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our annotation guidelines are similar to the guidelines used by BIBREF9 . We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful."
],
"highlighted_evidence": [
"We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation."
]
}
],
"annotation_id": [
"3c8b80193d34b3bf7d7269793ac848aab86b756c"
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"three parallel LSTM BIBREF21 layers"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our neural network model mainly consists of three parallel LSTM BIBREF21 layers. It has three different inputs, including the target comment, its news title and its username. Comment and news title are encoded into a sequence of word embeddings. We use pre-trained word embeddings in word2vec. Username is encoded into a sequence of characters. We use one-hot encoding of characters."
],
"highlighted_evidence": [
"Our neural network model mainly consists of three parallel LSTM BIBREF21 layers."
]
}
],
"annotation_id": [
"aa2b02963b992088afdd800a4174a84f80716c2a"
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
]
}
|
{
"caption": [
"Table 1: Performance of Logistic Regression Models",
"Table 2: Performance of Neural Network Models",
"Table 3: Performance of Ensemble Models",
"Figure 1: System Prediction Results of Comments that were Annotated as Hateful"
],
"file": [
"5-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Figure1-1.png"
]
}
|
1904.02357
|
Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation
|
Story composition is a challenging problem for machines and even for humans. We present a neural narrative generation system that interacts with humans to generate stories. Our system has different levels of human interaction, which enables us to understand at what stage of story-writing human collaboration is most productive, both to improving story quality and human engagement in the writing process. We compare different varieties of interaction in story-writing, story-planning, and diversity controls under time constraints, and show that increased types of human collaboration at both planning and writing stages results in a 10-50% improvement in story quality as compared to less interactive baselines. We also show an accompanying increase in user engagement and satisfaction with stories as compared to our own less interactive systems and to previous turn-taking approaches to interaction. Finally, we find that humans tasked with collaboratively improving a particular characteristic of a story are in fact able to do so, which has implications for future uses of human-in-the-loop systems.
|
{
"section_name": [
"Introduction",
"System Overview",
"Web Interface",
"Model Design",
"Experiments",
"Details",
"Conclusions and Future Work",
"Acknowledgments",
"Demo Video",
"Decoding",
"Training",
"Mechanical Turk Materials"
],
"paragraphs": [
[
"Collaborative human-machine story-writing has had a recent resurgence of attention from the research community BIBREF0 , BIBREF1 . It represents a frontier for AI research; as a research community we have developed convincing NLP systems for some generative tasks like machine translation, but lag behind in creative areas like open-domain storytelling. Collaborative open-domain storytelling incorporates human interactivity for one of two aims: to improve human creativity via the aid of a machine, or to improve machine quality via the aid of a human. Previously existing approaches treat the former aim, and have shown that storytelling systems are not yet developed enough to help human writers. We attempt the latter, with the goal of investigating at what stage human collaboration is most helpful.",
"gordon2009sayanything use an information retrieval based system to write by alternating turns between a human and their system. clark2018mil use a similar turn-taking approach to interactivity, but employ a neural model for generation and allow the user to edit the generated sentence before accepting it. They find that users prefer a full-sentence collaborative setup (vs. shorter fragments) but are mixed with regard to the system-driven approach to interaction. roemmele2017eval experiment with a user-driven setup, where the machine doesn't generate until the user requests it to, and then the user can edit or delete at will. They leverage user-acceptance or rejection of suggestions as a tool for understanding the characteristics of a helpful generation. All of these systems involve the user in the story-writing process, but lack user involvement in the story-planning process, and so they lean on the user's ability to knit a coherent overall story together out of locally related sentences. They also do not allow a user to control the novelty or “unexpectedness” of the generations, which clark2018mil find to be a weakness. Nor do they enable iteration; a user cannot revise earlier sentences and have the system update later generations. We develop a system that allows a user to interact in all of these ways that were limitations in previous systems; it enables involvement in planning, editing, iterative revising, and control of novelty. We conduct experiments to understand which types of interaction are most effective for improving stories and for making users satisfied and engaged. We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. 
The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories. The full range of interactions available to a user is: select a model, provide a topic, change diversity of content, collaborate on the planning for the story, and collaborate on the story sentences. It is entirely user-driven, as the users control how much is their own work and how much is the machine's at every stage. It supports revision; a user can modify an earlier part of a written story or of the story plan at any point, and observe how this affects later generations."
],
[
"Figure FIGREF3 shows a diagram of the interaction system. The dotted arrows represent optional user interactions.",
"Cross-model interaction requires the user to enter a topic, such as “the not so haunted house”, and can optionally vary the diversity used in the Storyline Planner or the Story Writer. Diversity numbers correspond directly to softmax temperatures, which we restrict to a reasonable range, determined empirically. The settings are sent to the Storyline Planner module, which generates a storyline in the form of a sequence of phrases as per the method of yao2018plan. Everything is then sent to the Story Writer, which returns three stories.",
"Intra-model interaction enables advanced interactions with one story system of the user's choice. The Storyline Planner returns either one storyline phrase or many, and composes the final storyline from the phrases the system generated, the phrases the user has written, and the edits the user has made. These are sent to the Story Writer, which returns either a single sentence or a full story, as per the user's request. The process is flexible and iterative. The user can choose how much or how little content they want to provide, edit, or re-generate, and they can return to any step at any time until they decide they are done.",
"To enable interactive flexibility, the system must handle open-domain user input. User input is lower-cased and tokenized to match the model training data via spaCy. Model output is naively detokenized via Moses BIBREF2 based on feedback from users that this was more natural. User input OOV handling is done via WordNet BIBREF3 by recursively searching for hypernyms and hyponyms (in that order) until either an in-vocabulary word is found or until a maximum distance from the initial word is reached. We additionally experimented with using cosine similarity to GloVe vectors BIBREF4 , but found that to be slower and not qualitatively better for this domain."
],
[
"Figure FIGREF10 shows screenshots for both the cross-model and intra-model modes of interaction. Figure FIGREF10 shows that the cross-model mode makes clear the differences between different model generations for the same topic. Figure FIGREF10 shows the variety of interactions a user can take in intra-model interaction, and is annotated with an example-in-action. User inserted text is underlined in blue, generated text that has been removed by the user is in grey strike-through. The refresh symbol marks areas that the user re-generated to get a different sentence (presumably after being unhappy with the first result). As can be seen in this example, minor user involvement can result in a significantly better story."
],
[
"All models for both the Storyline Planner and Story Writer modules are conditional language models implemented with LSTMs based on merity2018regularizing. These are 3-stacked LSTMs that include weight-dropping, weight-tying, variable length back propagation with learning rate adjustment, and Averaged Stochastic Gradient Descent (ASGD). They are trained on the ROC dataset BIBREF5 , which after lowercasing and tokenization has a vocabulary of 38k. Storyline Phrases are extracted as in yao2018plan via the RAKE algorithm BIBREF6 which results in a slightly smaller Storyline vocabulary of 31k. The Storyline Planner does decoding via sampling to encourage creative exploration. The Story Writer has an option to use one or all three systems, all of which decode via beamsearch and are detailed below.",
"The Title-to-Story system is a baseline, which generates directly from topic.",
"The Plan-and-Write system adopts the static model in yao2018plan to use the storyline to supervise story-writing.",
"Plan-and-Revise is a new system that combines the strengths of yao2018plan and holtzman2018learning. It supplements the Plan-and-Write model by training two discriminators on the ROC data and using them to re-rank the LSTM generations to prefer increased creativity and relevance. Thus the decoding objective of this system becomes INLINEFORM0 where INLINEFORM1 is the conditional language model probability of the LSTM, INLINEFORM2 is the discriminator scoring function, and INLINEFORM3 is the learned weight of that discriminator. At each timestep all live beam hypotheses are scored and re-ranked. Discriminator weights are learnt by minimizing Mean Squared Error on the difference between the scores of gold standard and generated story sentences."
],
[
"We experiment with six types of interaction: five variations created by restricting different capabilities of our system, and a sixth turn-taking baseline that mimics the interaction of the previous work BIBREF1 , BIBREF7 . We choose our experiments to address the research questions: What type of interaction is most engaging? Which type results in the best stories? Can a human tasked with correcting for certain weaknesses of a model successfully do so? The variations on interactions that we tested are:",
"We expand experiment 5 to answer the question of whether a human-in-the-loop interactive system can address specific shortcomings of generated stories. We identify three types of weaknesses common to generation systems – Creativity, Relevance, and Causal & Temporal Coherence, and conduct experiments where the human is instructed to focus on improving specifically one of them. The targeted human improvement areas intentionally match the Plan-and-Revise discriminators, so that, if successful, the \"human discriminator\" data can assist in training the machine discriminators. All experiments (save experiment 2, which lets the user pick between models) use the Plan-and-Revise system."
],
[
"We recruit 30 Mechanical Turk workers per experiment (270 unique workers total) to complete story writing tasks with the system. We constrain them to ten minutes of work (five for writing and five for a survey) and provide them with a fixed topic to control this factor across experiments. They co-create a story and complete a questionnaire which asks them to self-report on their engagement, satisfaction, and perception of story quality. For the additional focused error-correction experiments, we instruct Turkers to try to improve the machine-generated stories with regard to the given aspect, under the same time constraints. As an incentive, they are given a small bonus if they are later judged to have succeeded.",
"We then ask a separate set of Turkers to rate the stories for overall quality and the three improvement areas. All ratings are on a five-point scale. We collect two ratings per story, and throw out ratings that disagree by more than 2 points. A total of 11% of ratings were thrown out, leaving four metrics across 241 stories for analysis."
],
[
"We have shown that all levels of human-computer collaboration improve story quality across all metrics, compared to a baseline computer-only story generation system. We have also shown that flexible interaction, which allows the user to return to edit earlier text, improves the specific metrics of creativity and causal-temporal coherence above previous rigid turn-taking approaches. We find that, as well as improving story quality, more interaction makes users more engaged and likely to use the system again. Users tasked with collaborating to improve a specific story quality were able to do so, as judged by independent readers.",
"As the demo system has successfully used an ensemble of collaborative discriminators to improve the same qualities that untrained human users were able to improve even further, this suggests promising future research into human-collaborative stories as training data for new discriminators. It could be used both to strengthen existing discriminators and to develop novel ones, since discriminators are extensible to arbitrarily many story aspects."
],
[
"We thank the anonymous reviewers for their feedback, as well as the members of the PLUS lab for their thoughts and iterative testing. This work is supported by Contract W911NF-15- 1-0543 with the US Defense Advanced Research Projects Agency (DARPA)."
],
[
"The three-minute video demonstrating the interaction capabilities of the system can be viewed at https://youtu.be/-hGd2399dnA. (Same video as linked in the paper footnote)."
],
[
"Default diversity (Softmax Temperature) for the Storyline Planner is 0.5; for the Story Writer it is None (as beamsearch is used and thus can take, but does not require, a temperature). Beam size for all Story Writer models is 5. Additionally, Storyline Phrases are constrained to be unique (unless a user duplicates them), and beamsearch is not normalized by length (both choices determined empirically)."
],
[
"We follow the parameters used in yao2018plan and merity2018regularizing."
],
[
"Following are examples of the materials used in doing Mechanical Turk User Studies. Figure FIGREF37 is an example of the All + Creative focused experiment for story-writing. The instructions per experiment differ across all, but the template is the same. Figure FIGREF38 is the survey for ranking stories across various metrics. This remains constant save that story order was shuffled every time to control for any effects of the order a story was read in."
]
]
}
|
{
"question": [
"How is human interaction consumed by the model?",
"How do they evaluate generated stories?",
"Do they evaluate in other language appart from English?",
"What are the baselines?"
],
"question_id": [
"5c70fdd3d6b67031768d3e28336942e49bf9a500",
"f27502c3ece9ade265389d5ace90ca9ca42b46f3",
"ffb7a12dfe069ab7263bb7dd366817a9d22b8ef2",
"aa4b38f601cc87bf93849245d5f65124da3dc112"
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"displays three different versions of a story written by three distinct models for a human to compare",
"human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"gordon2009sayanything use an information retrieval based system to write by alternating turns between a human and their system. clark2018mil use a similar turn-taking approach to interactivity, but employ a neural model for generation and allow the user to edit the generated sentence before accepting it. They find that users prefer a full-sentence collaborative setup (vs. shorter fragments) but are mixed with regard to the system-driven approach to interaction. roemmele2017eval experiment with a user-driven setup, where the machine doesn't generate until the user requests it to, and then the user can edit or delete at will. They leverage user-acceptance or rejection of suggestions as a tool for understanding the characteristics of a helpful generation. All of these systems involve the user in the story-writing process, but lack user involvement in the story-planning process, and so they lean on the user's ability to knit a coherent overall story together out of locally related sentences. They also do not allow a user to control the novelty or “unexpectedness” of the generations, which clark2018mil find to be a weakness. Nor do they enable iteration; a user cannot revise earlier sentences and have the system update later generations. We develop a system that allows a user to interact in all of these ways that were limitations in previous systems; it enables involvement in planning, editing, iterative revising, and control of novelty. We conduct experiments to understand which types of interaction are most effective for improving stories and for making users satisfied and engaged. We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. 
The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories. The full range of interactions available to a user is: select a model, provide a topic, change diversity of content, collaborate on the planning for the story, and collaborate on the story sentences. It is entirely user-driven, as the users control how much is their own work and how much is the machine's at every stage. It supports revision; a user can modify an earlier part of a written story or of the story plan at any point, and observe how this affects later generations."
],
"highlighted_evidence": [
"We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories."
]
}
],
"annotation_id": [
"81242d85e0fa65a4c36b58e9c50450e5e104b588"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"separate set of Turkers to rate the stories for overall quality and the three improvement areas"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We then ask a separate set of Turkers to rate the stories for overall quality and the three improvement areas. All ratings are on a five-point scale. We collect two ratings per story, and throw out ratings that disagree by more than 2 points. A total of 11% of ratings were thrown out, leaving four metrics across 241 stories for analysis."
],
"highlighted_evidence": [
"We then ask a separate set of Turkers to rate the stories for overall quality and the three improvement areas. All ratings are on a five-point scale. We collect two ratings per story, and throw out ratings that disagree by more than 2 points. A total of 11% of ratings were thrown out, leaving four metrics across 241 stories for analysis."
]
}
],
"annotation_id": [
"11492d733ff04445f586acee9dc35a41feee950e"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"101e19761be8a2b0d37a67a43cde3ca40941e245"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Title-to-Story system"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The Title-to-Story system is a baseline, which generates directly from topic."
],
"highlighted_evidence": [
"The Title-to-Story system is a baseline, which generates directly from topic."
]
}
],
"annotation_id": [
"947a6075aae9a508486f9b3a215f8cdceb02472c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: Diagram of human-computer interaction mediated by the the demo system. The dotted arrows represent optional interactions that the user can take. Depending on the set-up, the user may choose to interact with one or all story models.",
"Figure 2: Screenshots of the demo user interface",
"Table 1: User self-reported scores, from 1-5. E: Entertainment value, Q: Quality of Story, S: Satisfaction with Story. Note that the final column Use Again is based on converting “no” to 0, “conditional” to 1, and “yes” to 2.",
"Table 2: Results for all experiments, from 1-5. Best scores per metric are bolded, scores not significantly different (α = 0.1, per Wilcoxon Signed-Rank Test) are starred. C-T stands for Causal-Temporal Coherence, the + experiments are the extensions where the user focuses on improving a particular quality.",
"Table 3: Training parameters for models used in demo.",
"Table 4: Questionnaire for user self-reporting, range 1 to 5 (1 low).",
"Figure 3: Template & Instructions for Writing Stories in the All + Creative experiment."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Figure3-1.png"
]
}
|
1907.02636
|
Collecting Indicators of Compromise from Unstructured Text of Cybersecurity Articles using Neural-Based Sequence Labelling
|
Indicators of Compromise (IOCs) are artifacts observed on a network or in an operating system that can be utilized to indicate a computer intrusion and detect cyber-attacks in an early stage. Thus, they play an important role in the field of cybersecurity. However, state-of-the-art IOC detection systems rely heavily on hand-crafted features with expert knowledge of cybersecurity, and require large-scale manually annotated corpora to train an IOC classifier. In this paper, we propose using an end-to-end neural-based sequence labelling model to identify IOCs automatically from cybersecurity articles without expert knowledge of cybersecurity. By using a multi-head self-attention module and contextual features, we find that the proposed model is capable of gathering contextual information from the texts of cybersecurity articles and performs better in the task of IOC identification. Experiments show that the proposed model outperforms other sequence labelling models, achieving an average F1-score of 89.0% on the English cybersecurity article test set, and an average F1-score of approximately 81.8% on the Chinese test set.
|
{
"section_name": [
"Introduction",
"Model",
"Token Embedding Layer",
"Sequence Representation Layer",
"CRF Layer",
"Features",
"Spelling Features",
"Contextual Features",
"Usage of Features",
"Datasets",
"Training Details",
"Results",
"Analysis of Contextual Features",
"Training the Proposed Model with Bilingual Data",
"Conclusions"
],
"paragraphs": [
[
"Indicators of Compromise (IOCs) are forensic artifacts that are used as signs when a system has been compromised by an attacker or infected with a particular piece of malware. To be specific, IOCs are composed of some combinations of virus signatures, IPs, URLs or domain names of botnets, MD5 hashes of attack files, etc. They are frequently described in cybersecurity articles, many of which are written in unstructured text, describing attack tactics, technique and procedures. For example, a snippet from a cybersecurity article is shown in Fig. FIGREF1 . From the text , token “INST.exe” is the name of an executable file of a malicious software, and the file “ntdll.exe” downloaded by “INST.exe” is a malicious file as well. Obviously, these kinds of IOCs can be then utilized for early detection of future attack attempts by using intrusion detection systems and antivirus software, and thus, they exert an important role in the field of cybersecurity. However, with the rapid evolvement of cyber threats, the IOC data are produced at a high volume and velocity every day, which makes it increasingly hard for human to gather and manage them.",
"A number of systems are proposed to help discover and gather malicious information and IOCs from various types of data sources BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . However, most of those systems consist of several components that identify IOCs by using human-crafted features that heavily rely on specific language knowledge such as dependency structure, and they often have to be pre-defined by experts in the field of the cybersecurity. Furthermore, they need a large amount of annotated data used as the training data to train an IOC classifier. Those training data are frequently difficult to be crowed-sourced, because non-experts can hardly distinguish IOCs from those non-malicious IPs or URLs. Thus, it is a time-consuming and laborious task to construct such systems for different languages.",
"In this work, we consider the task of collecting IOCs from cybersecurity articles as a task of sequence labelling of natural language processing (NLP). By applying a sequence labelling model, each token in an unstructured input text is assigned with a label, and tokens assigned with IOC labels are then collected as IOCs. Recently, sequence labelling models have been utilized in many NLP tasks. Huang et al. BIBREF6 proposed using a sequence labelling model based on the bidirectional long short-term memory (LSTM) BIBREF7 for the task of named entity recognition (NER). Chiu et al. BIBREF8 and Lample et al. BIBREF9 proposed integrating LSTM encoders with character embedding and the neural sequence labelling model to achieve a remarkable performance on the task of NER as well as part-of-speech (POS) tagging. Besides, Dernoncourt et al. BIBREF10 and Jiang et al. BIBREF11 proposed applying the neural sequence labelling model to the task of de-identification of medical records.",
"Among the previous studies of the neural sequence labelling task, Zhou el al. BIBREF12 firstly propose using an end-to-end neural sequence labelling model to fully automate the process of IOCs identification. Their model is on the basis of an artificial neural networks (ANN) with bidirectional LSTM and CRF. However, their newly introduced spelling features bring a more extraction of false positives, i.e., tokens that are similar to IOCs but not malicious. In this paper, we further introduce a multi-head self-attention module and contextual features to the ANN model so that the proposed model can perform better in gathering the contextual information from the unstructured text for the task of IOCs identification. Based on the results of our experiments, our proposed approach achieves an average precision of 93.1% and the recall of 85.2% on English cybersecurity article test set, and an average precision of 82.9% and recall of 80.7% on Chinese test set. We further evaluate the proposed model by training the model using both the English dataset and Chinese dataset, which even achieves better performance."
],
[
"Fig. FIGREF2 shows the 3 components (layers) of the proposed neural network architecture."
],
[
"The token embedding layer takes a token as input and outputs its vector representation. As shown in Fig. FIGREF2 , given an input sequence of tokens INLINEFORM0 , the output vector INLINEFORM1 ( INLINEFORM2 ) of each token INLINEFORM3 results from the concatenation of two different types of embeddings: token embedding INLINEFORM4 and the character-based token embeddings INLINEFORM5 , INLINEFORM6 that come from the output of a character-level bi-LSTM encoder."
],
[
"The Sequence Representation Layer takes the sequence of embeddings INLINEFORM0 ( INLINEFORM1 ) as input, and outputs a sequence INLINEFORM2 , where the INLINEFORM3 element of INLINEFORM4 represents the probability that the INLINEFORM5 token has the label INLINEFORM6 .",
"Different from the previous work of sequence labelling in news articles or patient notes BIBREF9 , BIBREF10 , sentences from a cybersecurity report often contain a large number of tokens as well as lists of IOCs with little context, making it much more difficult for LSTM to encode the input sentence correctly. Therefore, instead of the token LSTM layer in BIBREF12 , we propose sequence representation layer that consists of 3 modules, i.e., attention-based Bi-LSTM module, multi-head self-attention module and token feature module.",
"Considering that tokens cannot contribute equally to the representation of the input sequence, we introduce attention mechanism to Bi-LSTM to extract such tokens that are crucial to the meaning of the sentence. Then, we aggregate the representation of those informative words to form the vector of the input sequence. The attention mechanism is similar to the one proposed by Yang et al. BIBREF13 , which is defined as follows: DISPLAYFORM0 ",
"That is to say, we first compute the INLINEFORM0 as a hidden representation of the hidden states of Bi-LSTM INLINEFORM1 for INLINEFORM2 input token, where INLINEFORM3 is obtained by concatenating the INLINEFORM4 hidden states of forward and backward LSTM, i.e., INLINEFORM5 . Then, we measure the importance of the INLINEFORM6 token with a trainable vector INLINEFORM7 and get a normalized importance weight INLINEFORM8 through a softmax function. After that, the sentence vector INLINEFORM9 is computed as a weighted sum of INLINEFORM10 ( INLINEFORM11 ). Here, weight matrix INLINEFORM12 , bias INLINEFORM13 and vector INLINEFORM14 are randomly initialized and jointly learned during the training process. Note that each input sentence merely has one sentence vector INLINEFORM15 as its weighted representation, and INLINEFORM16 is then used as a part of the INLINEFORM17 output of attention-based Bi-LSTM module, where INLINEFORM18 ( INLINEFORM19 ).",
"Motivated by the successful application of self-attention in many NLP tasks BIBREF14 , BIBREF15 , we add a multi-head self-attention module to enhance the embedding of each word with the information of other words in a text adaptively. By means of this, the local text regions where convolution performs carry the global information of text. Following the encoder part of Vaswani et al. BIBREF14 , multi-head self-attention module is composed of a stack of several identical layers, each of which consists of a multi-head self-attention mechanism and two convolutions with kernel size 1. Given the sequence of embeddings INLINEFORM0 as input, and the output is defined as follows: DISPLAYFORM0 ",
"where, INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are parameter matrices for the projections of queries INLINEFORM3 , keys INLINEFORM4 and values INLINEFORM5 in the INLINEFORM6 head, respectively. Here, INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are set as the input sequence INLINEFORM10 ( INLINEFORM11 ). The INLINEFORM12 is then given to the two convolutions and the output of multi-head self-attention INLINEFORM13 ( INLINEFORM14 ) is obtained.",
"Furthermore, we introduce some features to defined IOCs to improve the performance of the proposed model on a very small amount of training data. Here, we define two types of features, i.e., spelling features and contextual features, and map each token INLINEFORM0 ( INLINEFORM1 ) to a feature vector INLINEFORM2 , where INLINEFORM3 is the spelling feature vector and INLINEFORM4 is the contextual feature vector. Note that the values of features are then jointly learned during the process of training. In Section SECREF3 , we will explain the features in more detail.",
"As shown in Fig. FIGREF2 , the vector INLINEFORM0 ( INLINEFORM1 ) is a concatenation of the INLINEFORM2 , INLINEFORM3 and INLINEFORM4 . Each vector INLINEFORM5 is then given to a feed-forward neural network with one hidden layer, which outputs the corresponding probability vector INLINEFORM6 ."
],
[
"We also introduce a CRF layer to output the most likely sequence of predicted labels. The score of a label sequence INLINEFORM0 is defined as the sum of the probabilities of unigram labels and the bigram label transition probabilities: DISPLAYFORM0 ",
"where INLINEFORM0 is a matrix that contains the transition probabilities of two subsequent labels. Vector INLINEFORM1 is the output of the token LSTM layer, and INLINEFORM2 is the probability of label INLINEFORM3 in INLINEFORM4 . INLINEFORM5 is the probability that a token with label INLINEFORM6 is followed by a token with the label INLINEFORM7 . Subsequently, these scores are turned into probabilities of the label sequence by taking a softmax function over all possible label sequences."
],
[
"We extract a vector of features for each tokens of input sequences. In this section, we present each feature category in detail."
],
[
"Since the IOCs tend to follow fixed patterns, we predefined several regular expressions and spelling rules to identify IOCs. For example, to identify a URL, we defined a regular expression INLINEFORM0 and set the value of the URL feature to 1 when the input token matches the regular expression. However, such expressions and spelling rules could introduce false positives, i.e., tokens that have the same spelling patterns as IOCs but are not malicious. In this work, we further introduce the contextual features as described next."
],
[
"IOCs in cybersecurity articles are often described in a predictable way: being connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human user can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” from the text shown in Fig. FIGREF1 . By analyzing the whole corpus, it is interesting that malicious file names tends to co-occur with words such as \"download\", \"malware\", \"malicious\", etc. In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords.",
"Taking the above into account, we introduce the contextual feature vector INLINEFORM0 for a given input token INLINEFORM1 , where the INLINEFORM2 element of INLINEFORM3 is defined as follows: DISPLAYFORM0 ",
" INLINEFORM0 is the frequency of token INLINEFORM1 in the whole corpus, while INLINEFORM2 is the frequency of contextual keyword INLINEFORM3 from the windowed portions of the texts centering on the token INLINEFORM4 in the whole corpus and INLINEFORM5 is the size of window. The set of contextual keywords INLINEFORM6 are automatically extracted from the annotated texts, where each contextual keyword INLINEFORM7 ( INLINEFORM8 ) satisfies the following conditions:",
" INLINEFORM0 , where INLINEFORM1 is the set of manually annotated IOCs and INLINEFORM2 is a the lower bound of the frequency.",
" INLINEFORM0 is not a punctuation or stopword.",
"Note that we extract contextual keywords only from manually annotated data (e.g., training set), while we compute the contextual feature vector in all of the unlabeled data. According to this definition, it is obvious that the dimension of the contextual feature vector is as the same as the number of extracted contextual keywords. The size of window INLINEFORM0 and the lower bound of frequency INLINEFORM1 are then tuned by the validation set."
],
[
"The feature vector for an input token is the concatenation of the token spelling feature vector and the contextaul feature vector. Here, to elucidate the best usage of the feature vector, we evaluate the feature vector by concatenating it at different locations in the proposed model, i.e., the input of the token LSTM layer ( INLINEFORM0 ), the hidden state of the token LSTM ( INLINEFORM1 ), and the output of token LSTM ( INLINEFORM2 ). Among them, to concatenate the feature vector with the LSTM hidden state vector and the sentence vector of attention in the token LSTM layer, as shown in Section SECREF4 , achieved the best performance. We speculate that the features played an important role in the task of IOCs identification and feature vectors near the output layer were able to improve the performance more significantly than those at other locations."
],
[
"For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. All of these cybersecurity articles are used to train the English word embedding. Afterwards, we randomly select 370 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training.",
"For Chinese dataset, we crawl 5,427 cybersecurity articles online from 35 cybersecurity blogs which are published from 2001 to 2018. All of these cybersecurity articles are used to train the Chinese word embedding. Afterwards, we randomly select 607 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 122 articles as the validation set and 122 articles as the test set; the remaining articles are used for training.",
"TABLE TABREF20 shows statistics of the datasets. The output labels are annotated with the BIO (which stands for “Begin”, “Inside” and “Outside”) scheme."
],
[
"For pre-trained token embedding, we apply word2vec BIBREF17 to all crawled 687 English APT reports and 5,427 Chinese cybersecurity articles described in Section SECREF21 respectively. The word2vec models are trained with a window size of 8, a minimum vocabulary count of 1, and 15 iterations. The negative sampling number of word2vec is set to 8 and the model type is skip-gram. The dimension of the output token embedding is set to 100.",
"The ANN model is trained with the stochastic gradient descent to update all parameters, i.e., token embedding, character embedding, parameters of Bi-LSTM, weights of sentence attention, weights of multi-head self-attention, token features, and transition probabilities of CRF layers at each gradient step. For regularization, the dropout is applied to the output of each sub layer of the ANN model. Further training details are given below: (a) For attention-based Bi-LSTM module, dimensions of character embedding, hidden states of character-based token embedding LSTM, hidden states of Bi-LSTM, and sentence attention are set to 25, 25, 100 and 100, respectively. For multi-head self-attention module, we employ a stack of 6 multi-head self attention layer, each of which has 4 head and dimension of each head is set to 64. (b) All of the ANN’s parameters are initialized with a uniform distribution ranging from -1 to 1. (c) We train our model with a fixed learning rate of 0.005. The minimum number of epochs for training is set as 30. After the first 30 epochs had been trained, we compute the average F1-score of the validation set by the use of the currently produced model after every epoch had been trained, and stop the training process when the average F1-score of validation set fails to increase during the last ten epochs. We train our model for, if we do not early stop the training process, 100 epochs as the maximum number. (d) We rescale the normalized gradient to ensure that its norm does not exceed 5. (e) The dropout probability is set to 0.5."
],
[
"As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score for all 11 types of labels for a baseline as well as the proposed model. As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12 . As presented in TABLE TABREF24 , the score obtained by the proposed model is clearly higher than the baseline. Here, as described in Section SECREF14 , the sizes of window and lower bounds of frequency for selecting contextual keywords are tuned as 4 and 7 throughout the evaluation of English dataset, and tuned as 3 and 4 throughout the evaluation of Chinese dataset. The number of extracted contextual keywords from the English dataset is 1,328, and from the Chinese dataset is 331.",
"Furthermore, we quantitatively compare our study with other typical works of sequence labelling, i.e., the work of Huang et al. BIBREF6 , the work of Lample et al. BIBREF9 and the work of Rei et al. BIBREF18 . Huang et al. BIBREF6 proposed a bidirectional LSTM model with a CRF layer, including hand-crafted features specialized for the task of sequence labelling. Lample et al. BIBREF9 described a model where the character-level representation was concatenated with word embedding and Rei et al. BIBREF18 improved the model by introducing an attention mechanism to the character-level representations. We train these models by employing the same training set and training parameters as the proposed model. As shown in TABLE TABREF24 , the proposed model obtains the highest precision, recall and F1-score than other models in the task of IOCs extraction. Compared with the second-best model of Lample et al. BIBREF9 , the performance gain of the proposed model on the English dataset is approximately 10.1% of precision and 10.0% of recall. The performance gain of the proposed model on the Chinese dataset is approximately 4.2% of precision and 9.0% of recall.",
"We also quantitatively compare our study with the work of Zhou et al. BIBREF12 , which proposed a bidirectional LSTM model with a CRF layer, including hand-crafted spelling features for the task of IOC identification. As shown in TABLE TABREF24 , the proposed model obtains a slightly higher F1-score on the English dataset and significantly higher F1-score on the Chinese dataset.",
"TABLE TABREF26 compares several examples of correct IOC extraction produced by the proposed model with one by the work of Lample et al. BIBREF9 . In the first example, the model of Lample et al. BIBREF9 fails to identify the malicious URL “http://www7.chrome-up.date/0m5EE”, because the token only appears in the test set and consists of several parts that are uncommon for URLs, such as “www7” and “date”, and thus both the token embedding and the character embedding lack proper information to represent the token as a malicious URL. The proposed model correctly identifies the URL, where the token is defined as a URL by spelling features and is then identified as a malicious URL by the use of the context information. In the second example, the model of Lample et al. BIBREF9 fails to identify token “cr.sh” of the input Chinese text as a malicious file name, while the token is assigned with a correct label by the proposed model. It is mainly because that the token “cr.sh” is defined as a token of file information by spelling features and tends to co-occur with words, “”(download) and “”(mining software). These two words often appear nearby malicious file information and are then extracted as contextual keywords in Section SECREF14 . The token “cr.sh” is then correctly identified as a token of malicious file information by the use of the contextual features."
],
[
"The proposed model provides an intuitive way to inspect the contextual information of each given token. As described in Section SECREF14 , we initialize the contextual features of each given token using the automatically extracted contextual keywords and jointly learn them during the process of training with the whole ANN model. To prove the effectiveness of the contextual features, we visualize the learned weights martix of each contextual keyword of contextual feature and show several examples in Fig. FIGREF28 . Each row of the matrix in each plot indicates the weights of contextual keywords for the given tokens. From this we see which contextual keyword were considered more important to represent the contextual information of the given token. We can see from the matrix in Fig. FIGREF28 that, for the token “spearphshing”, which is an email-spoofing attack method, the contextual keyword “email” has the largest weight. For the malware “SunOrcal”, which drops several malicious executable files, contextual keywords “droppper” and “dropper” have larger weights than other contextual keywords such as “ascii”, “port” and “type”. For non-IOC token “socket”, contextual keywords “gateway” and “port” yield larger weights than other keywords because \"socket\" tends to co-occur with “gateway” and “port”.",
"We further calculate the average weight of each contextual keyword and show the top 10 and bottom 10 largest weighted contextual keywords in TABLE TABREF29 . From this we see that contextual keywords such as, “hash” and “filename”, which tends to co-occur with malicious filenames, have the largest weights for IOCs, while the contextual keywords such as “ascii”, “password” have the largest weights for non-IOCs. Here, it is interesting to find that contextual keyword “dropped” and “droppper”, which tend to co-occur with malicious file information and malwares, yield large weights for IOCs but small weights for non-IOCs. The proposed ANN model benefits from the differences of contextual information between IOCs and non-IOCs that is represented by the contextual features, and thus, achieves better performance than the previous works."
],
[
"Even though security articles are written in different languages, most of the IOCs are written in English, and are described in a similar pattern. Therefore, using multilingual corpora could be a solution for addressing the lack of annotated data, and the performance of the proposed model is expected to be improved by extending the training set. To examine the hypothesis, we ran a number of additional experiments using both the English dataset and Chinese dataset, both of which are described in Section SECREF21 and are not parallel data or comparable data.",
"As pre-trained word embeddings for the bilingual training dataset, we applied a cross-lingual word embedding obtained by the work of Duong el al BIBREF19 , where the English-Chinese cross-lingual dictionary is obtained by simply translating all the English words from English dataset to Chinese and Chinese words from Chinese dataset to English using Google translation. As contextual feature vector, we concatenate the contextual feature vector obtained from English dataset with the contextual feature vector obtained from Chinese dataset. Then we merge the English training set and the Chinese training set into one set and train the proposed model with the merged bilingual training set. TABLE TABREF31 shows that the proposed model trained with the English training set and Chinese training set achieves a small improvement of F1-score on English test set when compared with the model trained with only English training set, and a great improvement of F1-score on Chinese test set when compared with the model trained with only Chinese training set.",
"We compare scores of each label when the proposed model is trained with different training sets in TABLE TABREF32 . When using the English test set, the F1-scores of labels “attack method”, “attack target” and “malware” by the model trained with the English training set and Chinese training set are lower than those scores by the model trained with only the English training set. It is mainly because that tokens of these labels can be written in different languages, which harms the model trained with the bilingual training data set. In contrast, benefiting from the extension of training set, for types of labels that are often written in English, e.g., “domain ”, “file imformation”, “IPv4” and “vlunerability”, the proposed model trained with the English training set and the Chinese training set achieves higher scores than the model trained with only the English training set. When using the Chinese test set, the proposed model trained with the English training set and the Chinese training set obtained a obviously higher F1-scores than the model trained with only the Chinese training set for almost all the types of labels. It is interesting to find that types of labels “e-mail address”, “attack method”, “attacker”, which lack of instances in Chinese training set, show the biggest improvement by using the model trained with the bilingual training set."
],
[
"To conclude, in this paper, we newly introduce a multi-head self-attention module and contextual features to the neural based sequence labelling model, which significantly improved the performance in the task of IOC identification. Based on the evaluation results of our experiments, our proposed model is proved effective on both the English test set and the Chinese test set. We further evaluated the proposed model by training the proposed model using both the English training set and the Chinese training set and compared it with models that are trained with only one training set, where the model trained with the merged bilngual training set performs better.",
"One direction for future work is to integrate contextual embeddings from a bidirectional language model into our proposed model. Pretrained neural language models have proved effective in sequence labelling models BIBREF26 , BIBREF27 , BIBREF28 . Integrating both the contextual features and contextual embeddings into the neural sequence labelling model is expected to further improve its performance."
]
]
}
|
{
"question": [
"What is used a baseline?",
"What contextual features are used?",
"Where are the cybersecurity articles used in the model sourced from?",
"What type of hand-crafted features are used in state of the art IOC detection systems?"
],
"question_id": [
"08b87a90139968095433f27fc88f571d939cd433",
"ef872807cb0c9974d18bbb886a7836e793727c3d",
"4db3c2ca6ddc87209c31b20763b7a3c1c33387bc",
"63337fd803f6fdd060ebd0f53f9de79d451810cd"
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"topic_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score for all 11 types of labels for a baseline as well as the proposed model. As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12 . As presented in TABLE TABREF24 , the score obtained by the proposed model is clearly higher than the baseline. Here, as described in Section SECREF14 , the sizes of window and lower bounds of frequency for selecting contextual keywords are tuned as 4 and 7 throughout the evaluation of English dataset, and tuned as 3 and 4 throughout the evaluation of Chinese dataset. The number of extracted contextual keywords from the English dataset is 1,328, and from the Chinese dataset is 331."
],
"highlighted_evidence": [
"As the baseline, we simply judge the input token as IOCs on the basis of the spelling features described in BIBREF12 ."
]
}
],
"annotation_id": [
"102b5f1010602ad1ea20ccdc52d330557bfc7433"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "The words that can indicate the characteristics of the neighbor words as contextual keywords and generate it from the automatically extracted contextual keywords.",
"evidence": [
"IOCs in cybersecurity articles are often described in a predictable way: being connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human user can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” from the text shown in Fig. FIGREF1 . By analyzing the whole corpus, it is interesting that malicious file names tends to co-occur with words such as \"download\", \"malware\", \"malicious\", etc. In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords."
],
"highlighted_evidence": [
" In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords."
]
}
],
"annotation_id": [
"aef28565f179d4c9f16d43c8a36ed736718157fc"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. All of these cybersecurity articles are used to train the English word embedding. Afterwards, we randomly select 370 articles, and manually annotate the IOCs contained in the articles. Among the selected articles, we randomly select 70 articles as the validation set and 70 articles as the test set; the remaining articles are used for training."
],
"highlighted_evidence": [
"For English dataset, we crawl 687 cybersecurity articles from a collection of advanced persistent threats (APT) reports which are published from 2008 to 2018. "
]
}
],
"annotation_id": [
"e9486a8eb7bfa181261aef55adfe2acf4a011664"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"c3aaa905861aab52233d0a80bb71b8c517cc2e94"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
}
|
{
"caption": [
"Fig. 2. ANN model of sequence labeling for IOCs automatic identification",
"TABLE I STATISTICS OF DATASETS (NUMBERS OF TRAINING / VALIDATION / TEST SET)",
"TABLE II EVALUATION RESULTS (MICRO AVERAGE FOR 11 LABELS)",
"TABLE III EXAMPLES OF CORRECT IDENTIFICATION BY THE PROPOSED MODEL",
"Fig. 3. Heatmap of part of contextual features martix in the English dataset",
"TABLE IV TOP 10 AND BOTTOM 10 LARGEST WEIGHTED CONTEXTUAL KEYWORDS OF CONTEXTUAL FEATURE IN THE ENGLISH DATASET",
"TABLE V COMPARISON OF EVALUATION RESULTS WHEN TRAINING THE PROPOSED MODEL WITH DIFFERENT TRAINING SETS (MICRO AVERAGE PRECISION / RECALL / F1-SCORE FOR 11 LABELS)",
"TABLE VI EVALUATION RESULTS FOR EACH LABEL WHEN TRAINING THE PROPOSED MODEL WITH DIFFERENT TRAINING SETS (PRECISION / RECALL / F1-SCORE)"
],
"file": [
"3-Figure2-1.png",
"4-TableI-1.png",
"5-TableII-1.png",
"6-TableIII-1.png",
"6-Figure3-1.png",
"6-TableIV-1.png",
"7-TableV-1.png",
"8-TableVI-1.png"
]
}
|
1605.08675
|
Boosting Question Answering by Deep Entity Recognition
|
In this paper an open-domain factoid question answering system for Polish, RAFAEL, is presented. The system goes beyond finding an answering sentence; it also extracts a single string corresponding to the required entity. Herein the focus is placed on different approaches to entity recognition, essential for retrieving information matching question constraints. Apart from the traditional approach, including named entity recognition (NER) solutions, a novel technique, called Deep Entity Recognition (DeepER), is introduced and implemented. It allows a comprehensive search for all forms of entity references matching a given WordNet synset (e.g. an impressionist), based on a previously assembled entity library, created by analysing the first sentences of encyclopaedia entries as well as disambiguation and redirect pages. DeepER also provides automatic evaluation, which makes numerous experiments possible, including over a thousand questions from a quiz TV show answered on the basis of Polish Wikipedia. The final results of a manual evaluation on a separate question set show that the strength of the DeepER approach lies in its ability to answer questions that demand answers beyond the traditional categories of named entities.
|
{
"section_name": [
"Introduction",
"RAFAEL",
"Related work",
"System Architecture",
"Knowledge Base Processing",
"Question Analysis",
"Document Retrieval",
"Entity Recognition",
"Mention selection",
"Deep Entity Recognition",
"Entity Library",
"Evaluation",
"Data",
"Automatic Evaluation",
"Results",
"Experiments",
"Final System Evaluation",
"Discussion",
"Conclusions",
"Appendix A: Named Entity Recognition in RAFAEL",
"Acknowledgments"
],
"paragraphs": [
[
"A Question Answering (QA) system is a computer program capable of understanding questions in a natural language, finding answers to them in a knowledge base and providing answers in the same language. A task defined so broadly seems very hard; BIBREF0 describes it as AI-Complete, i.e. equivalent to building a general artificial intelligence. Nonetheless, the field has attracted a lot of attention in the Natural Language Processing (NLP) community, as it provides a way to employ numerous NLP tools in an exploitable end-user system. This has resulted in valuable contributions within TREC competitions BIBREF1 and, quite recently, in a system called IBM Watson BIBREF2 , successfully competing with humans in the task.",
"However, the problem remains far from solved. Firstly, solutions designed for English are not always easily transferable to other languages with more complex syntax rules and less resources available, such as Slavonic. Secondly, vast complexity and formidable hardware requirements of IBM Watson suggest that there is still a room for improvements, making QA systems smaller and smarter.",
"This work attempts to contribute in both of the above areas. It introduces RAFAEL (RApid Factoid Answer Extraction aLgorithm), a complete QA system for the Polish language. It is the first QA system designed to use an open-domain plain-text knowledge base in Polish to address factoid questions not only by providing the most relevant sentence, but also by extracting an entity representing the answer itself. The Polish language, like other Slavonic languages, features complex inflection and relatively free word order, which poses additional challenges in QA. Chapter SECREF2 contains a detailed description of the system architecture and its constituents.",
"In the majority of such systems, designers' attention focus on different aspects of a sentence selection procedure. Herein, a different idea is incorporated, concentrating on an entity picking procedure. It allows to compare fewer sentences, likely to contain an answer. To do that, classical Named Entity Recognition (NER) gets replaced by Deep Entity Recognition. DeepER, introduced in this work, is a generalisation of NER which, instead of assigning each entity to one of several predefined NE categories, assigns it to a WordNet synset.",
"For example, let us consider a question: Which exiled European monarch returned to his country as a prime minister of a republic?. In the classical approach, we recognise the question as concerning a person and treat all persons found in texts as potential answers. Using DeepER, it is possible to limit the search to persons being monarchs, which results in more accurate answers. In particular, we could utilise information that Simeon II (our answer) is a tsar; thanks to WordNet relations we know that it implies being a monarch. DeepER is a generalisation of NER also from another point of view – it goes beyond the classical named entity categories and treats all entities equally. For example, we could answer a question Which bird migrates from the Arctic to the Antarctic and back every year?, although arctic tern is not recognized as NE by NER systems. Using DeepER, we may mark it as a seabird (hence a bird) and include among possible answers. Chapter SECREF3 outlines this approach.",
"The entity recognition process requires an entities library, containing known entities, their text representations (different ways of textual notation) and WordNet synsets, to which they belong. To obtain this information, the program analyses definitions of entries found in encyclopaedia (in this case the Polish Wikipedia). In previous example, it would use a Wikipedia definition: The Arctic Tern (Sterna paradisaea) is a seabird of the tern family Sternidae. This process, involving also redirect and disambiguation pages, is described in section SECREF40 . Next, having all the entities and their names, it suffices to locate their mentions in a text. The task (section SECREF73 ) is far from trivial because of a complicated named entity inflection in Polish (typical for Slavonic languages, see BIBREF3 ).",
"The DeepER framework also provides another useful service, i.e. automatic evaluation. Usually, QA systems are evaluated by checking, based on human judgement, whether the obtained answer agrees with the actual one. Plain string-to-string equality is not enough, as many entities have different text representations, e.g. John F. Kennedy is as good as John Fitzgerald Kennedy and John Kennedy, or JFK (again, nominal inflection in Polish complicates the problem even more). With DeepER, however, a candidate answer can undergo the same recognition process and be compared to the actual expected entity, not a string.",
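The entity-level comparison behind this automatic evaluation can be sketched in a few lines of Python; the entity library below is a toy stand-in for the DeepER resources (the names and ids are illustrative, not actual data):

```python
# Toy entity library: maps each known surface form to an entity identifier.
# In DeepER this mapping is derived from encyclopaedia entries; the entries
# below are illustrative stand-ins.
ENTITY_LIBRARY = {
    "john f. kennedy": "E1",
    "john fitzgerald kennedy": "E1",
    "john kennedy": "E1",
    "jfk": "E1",
    "richard nixon": "E2",
}

def resolve(mention):
    """Map a textual mention to an entity id, or None if unknown."""
    return ENTITY_LIBRARY.get(mention.strip().lower())

def answers_match(candidate, gold):
    """Entity-level comparison: two different strings count as the same
    answer when they resolve to the same library entity."""
    c, g = resolve(candidate), resolve(gold)
    return c is not None and c == g

print(answers_match("JFK", "John Fitzgerald Kennedy"))  # True
print(answers_match("Richard Nixon", "John Kennedy"))   # False
```

The point of the sketch is that equality is decided in entity space rather than string space, so all text representations of one entity become interchangeable.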
"Thanks to automatic evaluation, vast experiments requiring numerous evaluation runs may be performed swiftly, saving a massive amount of time and human resources. As a test set, authentic questions from a popular Polish quiz TV show are used. Results of experiments, testing (among others) the optimal context length, the number of retrieved documents and the type of entity recognition solution, appear in section SECREF88 .",
"To avoid overfitting, the final system evaluation is executed on a separate test set, previously unused in development, and is checked manually. The results are shown in section SECREF93 and discussed in chapter SECREF6 . Finally, chapter SECREF7 concludes the paper."
],
[
"As stated in previous chapter, RAFAEL is a computer system solving a task of Polish text-based, open-domain, factoid question answering. It means that provided questions, knowledge base and returned answers are expressed in Polish and may belong to any domain. The system analyses the knowledge base, consisting of a set of plain text documents, and returns answers (as concise as possible, e.g. a person name), supplied with information about supporting sentences and documents.",
"What are the kinds of requests that fall into the category of factoid questions? For the purpose of this study, it is understood to include the following types:",
"Although the above list rules out many challenging types of questions, demanding more elaborate answers (e.g. Why was JFK killed?, What is a global warming?, How to build a fence?), it still involves very distinct problems. Although RAFAEL can recognize factoid questions from any of these types and find documents relevant to them (see more in section SECREF18 and BIBREF4 ), its answering capabilities are limited to those requesting single unnamed entities and named entities. In this document, they are called entity questions.",
"The task description here is similar to the TREC competitions and, completed with test data described in section SECREF80 , could play a similar role for Polish QA, i.e. provide a possibility to compare different solutions of the same problem. More information about the task, including its motivation, difficulties and a feasibility study for Polish could be found in BIBREF5 ."
],
[
"The problem of Question Answering is not new to the Polish NLP community (nor to those working on other morphologically rich languages), but none of the studies presented so far coincides with the notion of plain text-based QA presented above.",
"First Polish QA attempts date back to 1985, when BIBREF6 presented a Polish interface to ORBIS database, containing information about the solar system. The database consisted of a set of PROLOG rules and the role of the system (called POLINT) was to translate Polish questions to appropriate queries. Another early solution, presented by BIBREF7 , could only work in a restricted domain (business information).",
"A system dealing with a subset of the TREC tasks was created for Bulgarian by BIBREF8 . His solution answers only three types of questions: Definition, Where-Is and Temporal. He was able to achieve good results with 100 translated TREC questions, using several manually created answer patterns, without NER or any semantic information. Another system for Bulgarian BIBREF9 participated in the CLEF 2005 competition. Its answer extraction module is based on partial grammars, which play the role of patterns for different types of questions. They could answer correctly 37 of 200 questions, of which only 16 belong to the factoid type. Previously the same team BIBREF10 took part in a Bulgarian-English track of the CLEF 2004, in which Bulgarian questions were answered using English texts.",
"A QA solution was also created for Slovene BIBREF11 . The task there is to answer students' questions using databases, spreadsheet files and a web service. Therefore, it differs from the problem discussed above by limited domain (issues related to a particular faculty) and the non-textual knowledge base. Unfortunately, no quantitative results are provided in this work.",
"More recently, several elements of a Polish QA system called Hipisek were presented by BIBREF12 . It is based on a fairly common scheme of transforming a question into a search query and finding the most appropriate sentence satisfying the question constraints. Unfortunately, a very small evaluation set (65 questions) and an unspecified knowledge base (gathered by a web crawler) make it difficult to compare the results. In their later works BIBREF13 , BIBREF14 , the team concentrated on spatial reasoning using a knowledge base encoded as a set of predicates.",
"The approach presented by BIBREF15 is the closest to the scope of this work, as it includes analysis of Polish Wikipedia content and evaluation is based on questions translated from a TREC competition. Unfortunately, it heavily relies on a structure of Wikipedia entries, making it impossible to use with an arbitrary textual corpus.",
"A non-standard approach to answer patterns has been proposed by BIBREF16 . In their Czech open-domain QA system they used a set of templates associated with question types, but also presented a method to learn them semi-automatically from search results. BIBREF17 in their Bulgarian QA system concentrated on semantic matching between a question and a possible answer, checked using dependency parsing. However, they provide no data regarding the answering precision of the whole system.",
"The last Polish system worth mentioning has been created by BIBREF18 . Generally, their task, called Open Domain Question Answering (ODQA), resembles the one treated here, but with one major difference. A document is considered an answer; therefore they focus on improving ranking at the document retrieval stage. They found that it could benefit from taking the proximity of query term occurrences into account.",
"As some of Slavonic languages lack necessary linguistic tools and resources, only partial solutions of QA problems exist for them, e.g. document retrieval for Macedonian BIBREF19 , question classification for Croatian BIBREF20 or answer validation for Russian BIBREF21 .",
"The idea of DeepER in a nutshell is to improve QA by annotating a text with WordNet synsets using an entity base created by understanding definitions found in encyclopaedia. Parts of this concept have already appeared in the NLP community.",
"A technique of coordinating synsets assigned to a question and a possible answer emerged in a study by BIBREF45 . While the question analysis there seems very similar to this work, entity library (called proper noun ontology) generation differs a lot. The author analysed 1 GB of newswire text and extracted certain expressions, e.g. \"X, such as Y\" implies that Y is an instance of X. Although the precision of the resulting base was not very good (47 per cent for non-people proper names), it led to a substantial improvement of QA performance.",
"The idea of analysing encyclopaedic definitions to obtain this type of information already appeared, but was employed for different applications. For example, BIBREF46 described a method of building a gazetteer by analysing hyperonymy branches of nouns of first sentences in Wikipedia definitions. Unlike in this work, an original synset was replaced by a coarse-grained NER category. Another example of application is a NE recognizer BIBREF47 using words from a definition as additional features for a standard CRF classifier. In their definition analysis only the last word of the first nominal group was used.",
"Other researchers dealt with a task explicitly defined as classifying Wikipedia entries into NER categories. For example, BIBREF48 addressed the problem by combining traditional text classification techniques (bag of words) with contexts of entity mentions. Others BIBREF49 thoroughly examined article categories as a potential source of is-a relations in a taxonomy (99 per cent of entries have at least one category). Inhomogeneity of categories turned out to be the main problem, dealt with by a heuristic classifier assigning is-a and not-is-a labels. Categories were also used as features in a NER task BIBREF50 , but it required a set of manually designed patterns to differentiate between categories of different nature.",
"Exploring a correspondence between Wikipedia entries and WordNet synsets found an application in automatic enriching ontologies with encyclopaedic descriptions BIBREF51 . However, only NEs already appearing in the WordNet were considered. The task (solved by bag-of-words similarity) is non-trivial only in case of polysemous words, e.g. which of the meanings of Jupiter corresponds to which Wikipedia article? Others BIBREF52 concentrated on the opposite, i.e. extending the WordNet by NEs that are not there yet by adding titles of entries as instances of synsets corresponding to their common category.",
"Also, some see Wikipedia as an excellent source of high-quality NER training data. Again, this requires projecting entries to NE categories. A thorough study of this problem, presented by BIBREF53 , utilizes features extracted from article content (bag of words), categories, keywords, inter-article and inter-language links. The final annotated corpus turns out to be as good for NER training as a manually annotated gold standard.",
"Finally, some researchers try to generalise NER to other categories, but keep the same machine-learning-based approach. For example, BIBREF54 developed a tagger, assigning words in a text to one of 41 supersenses. Supersenses include NE categories, but also other labels, such as plant, animal or shape. The authors projected word-sense annotations of publicly available corpora to supersenses and applied perceptron-trained Hidden Markov Model for sequence classification, obtaining precision and recall around 77 per cent."
],
[
"A general architectural scheme of RAFAEL (figure FIGREF11 ) has been inspired by similar systems developed for English; for examples see works by BIBREF22 and BIBREF23 .",
"Two of the steps in the diagram concern offline processing of a knowledge base. Firstly, it is indexed by a search engine to ensure efficient searching in further stages (INDEXING). Secondly, it may be annotated using a set of tools (NLP), but this could also happen at an answering stage for selected documents only.",
"After the system receives a question, it gets analysed (QUESTION ANALYSIS) and transformed into a data structure called a question model. One of its constituents, a search query, is used to find a set of documents which are probably appropriate for the current problem (SEARCH). For each of the documents, all entity mentions compatible with the obtained question type (e.g. monarchs) are extracted (ENTITY RECOGNITION). For each of them, a context is generated (CONTEXT GENERATION). Finally, a distance between the question content and the entity context is computed to assess its relevance (DISTANCE MEASURE). All the mentions and their distance scores are stored and, after no more documents are left, used to select the best match (BEST ENTITY SELECTION). The system returns the entity, supplied with information about a supporting sentence and a document, as an answer."
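The answering loop just described can be summarised as a Python skeleton; all five callables below are placeholders standing in for the corresponding RAFAEL modules, and the toy instantiation only illustrates the data flow, not the real interfaces:

```python
def answer(question, analyse, search, recognise, context_of, distance):
    """Skeleton of the answering loop from the architecture diagram:
      analyse    -> question model (type, query, content words)
      search     -> ranked documents for a query
      recognise  -> entity mentions in a document matching the question type
      context_of -> textual context of a mention in a document
      distance   -> score between question content and a context (lower is better)
    """
    model = analyse(question)
    best, best_score = None, float("inf")
    for doc in search(model["query"]):
        for mention in recognise(doc, model["type"]):
            score = distance(model["content"], context_of(doc, mention))
            if score < best_score:
                best, best_score = mention, score
    return best

# Toy instantiation: two documents, set-difference size as the distance.
docs = {"d1": ["Simeon II", "Sofia"], "d2": ["Juan Carlos"]}
result = answer(
    "Which exiled monarch returned as prime minister?",
    analyse=lambda q: {"query": q, "type": "person", "content": {"exiled", "monarch"}},
    search=lambda q: ["d1", "d2"],
    recognise=lambda d, t: docs[d],
    context_of=lambda d, m: {"exiled", "monarch"} if m == "Simeon II" else {"king"},
    distance=lambda content, ctx: len(content - ctx),
)
print(result)  # Simeon II
```

The design point is that document retrieval only narrows the search space; the final choice is made per entity mention, by scoring its context against the question content.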
],
[
"Knowledge base (KB) processing consists of two elements: indexing and annotating. The objective of the first is to create an index for efficient searching using a search engine. In the system, Lucene 3.6 is used to build two separate full-text indices: regular and stemmed using a built-in stemmer for Polish, Stempel BIBREF24 .",
"Secondly, texts go through a cascade of annotation tools, enriching it with the following information:",
"Morphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,",
"Tagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,",
"Syntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,",
"Named entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 .",
"All the annotations are stored in a variant of the TEI P5 standard, designed for the National Corpus of Polish BIBREF31 . As noted previously, the process of annotating is not indispensable at the stage of offline KB processing; it could just as well be executed only on documents returned from the search engine (for examples see Webclopedia by BIBREF22 or LASSO by BIBREF23 ). However, since during evaluation experiments the same documents undergo the process hundreds of times, it seems reasonable to process the whole KB only once."
],
[
"The goal of question analysis is to examine a question and extract all the information that suffices for answer finding. A resulting data structure, called question model, contains the following elements:",
"Question type – a description of expected answer type, instructing the system, what type of data could be returned as an answer. It has three levels of specificity:",
"General question type – one of the types of factoid questions, enumerated at the beginning of this chapter,",
"Named entity type – applicable only in case general type equals named entity. Possible values are the following: place, continent, river, lake, mountain, mountain range, island, archipelago, sea, celestial body, country, state, city, nationality, person, first name, last name, band, dynasty, organisation, company, event, date, century, year, period, number, quantity, vehicle, animal, title.",
"Focus synset – applicable in case of entity questions; a WordNet synset, to which a question focus belongs; necessary for DeepER.",
"Search query – used to find possibly relevant documents,",
"Question content – the words from question which are supposed to appear also in context of an answer.",
"The task presented above, called question classification, is an example of text classification with very short texts. It could be tackled by a general-purpose classifier; for example, BIBREF11 used SVMs (Support Vector Machines) for closed-domain Slovene QA system; BIBREF32 employed SNoW (Sparse Network of Winnows) for hierarchical classification of TREC questions. For Polish results are not satisfactory BIBREF4 because of data sparsity.",
"However, sometimes a solution seems quite evident, as some question types enforce the question's structure. For example, when a question begins with Who or When, it belongs to the person or date question type, respectively. That is why a set of 176 regular expressions (in the case of RAFAEL) suffices to deal with them. They match only a subset of questions (36.15 per cent of the training set), but are highly unambiguous (precision of classification equals 95.37 per cent). Nevertheless, some BIBREF33 rely solely on such patterns, but need a great number of them (1,273).",
"Unfortunately, most entity questions are ambiguous, i.e. it is not enough to inspect the interrogative pronoun to find the answer type. They may begin with what or which, followed by a question focus. For example, let us consider the question Which russian submarine sank in 2000 with its whole crew?. Its focus (russian submarine) carries the information that the question could be answered by a named entity of type vehicle. The whole process of focus analysis is shown in figure FIGREF25 . The first nominal group after the pronoun serves as a possible lexeme name in plWordNet 2.1 BIBREF34 . As long as there are no results, it is replaced by its semantic head. When a matching lexeme exists in WordNet, the set of all its hypernyms is extracted. If any element of the set corresponds to one of the named entity types, this type is recorded in the question model. Otherwise the general question type takes the value unnamed entity. A WordNet-assisted focus analysis was also implemented in one of the solutions participating in a TREC competition BIBREF35 .",
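The hypernym walk at the heart of focus analysis can be sketched as follows; the graph is a tiny illustrative stand-in for plWordNet, and the synset names are assumptions rather than real plWordNet identifiers:

```python
# Toy hypernym graph standing in for plWordNet: each synset maps to its
# direct hypernym. The entries are illustrative, not actual plWordNet data.
HYPERNYM = {
    "submarine": "vehicle",
    "vehicle": "artifact",
    "tsar": "monarch",
    "monarch": "person",
}
# Synsets that correspond to named entity types of the question model.
NE_TYPES = {"vehicle", "person", "place", "date"}

def focus_to_type(focus):
    """Walk the hypernym chain of the question focus until a synset
    corresponding to a named-entity type is reached; otherwise fall
    back to the general type 'unnamed entity'."""
    synset = focus
    while synset is not None:
        if synset in NE_TYPES:
            return synset
        synset = HYPERNYM.get(synset)
    return "unnamed_entity"

print(focus_to_type("submarine"))  # vehicle
print(focus_to_type("tsar"))       # person
```

In the real system the focus first has to be mapped to a lexeme at all (falling back to its semantic head when the full nominal group is missing from WordNet); that lookup step is omitted here.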
"Search query generation is described in the next chapter. The last element of a question model, called question content, contains segments which are to be compared with texts to find the best answer. It includes all the words of the interrogative sentence except those included in the matched pattern (Which, ?) and the focus (submarine). In our example the following are left: russian, sank, in, 2000, with, its, whole, crew. An entity mention whose context resembles this set will be selected as an answer (see details in section SECREF33 ).",
"The question analysis stage explained above follows a design presented in previous works BIBREF4 , BIBREF36 , where more details could be found. The major difference lies in result processing – an original synset is not only projected to one of the named entity types, but also recorded as a focus synset in question type, utilised in DeepER to match entity types. In our example, it would only consider submarines as candidate answers."
],
[
"The use of search engines in QA systems is motivated mainly by performance reasons. Theoretically, we could analyse every document in a text base and find those most relevant to our query. However, it would take an excessive amount of time to process the documents, the majority of which belong to irrelevant domains (839,269 articles in the test set). A search engine is used to speed up the process by selecting a set of documents and limiting any further analysis to them.",
"As described in section SECREF12 , a knowledge base is indexed by Lucene offline. Given a question, we need to create a search query. The problem is that an answer in the knowledge base is probably expressed differently than the question. Hence, a query created directly from words of the question would not yield results, unless using a highly-redundant KB, such as the WWW (for this type of solution see BIBREF37 ). Therefore, some of the query terms should be dropped – based on their low IDF BIBREF38 or more complex heuristics BIBREF23 . On the other hand, the query may be expanded with synonyms BIBREF22 or derived morphological forms BIBREF38 .",
"Finally, we need to address the term matching issue – how to compare a query keyword and a text word in a morphologically rich language, such as Polish? Apart from exact match, it is also possible to use a stemmer or fuzzy queries, available in Lucene (accepting a predefined Levenshtein distance between matching strings).",
"Previous experiments BIBREF36 led to the following query generation procedure:",
"Remove all words matched by a regular expression at the classification stage (What, Which, etc.),",
"Keep a question focus,",
"Connect all the remaining words by OR operator,",
"Use fuzzy term matching strategy with absolute distance equal 3 characters and fixed prefix.",
"Lucene handles a query and yields a ranked document list, of which the first N are transferred to further analysis. The influence of the value of N on answering performance is evaluated in section SECREF88 ."
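The four-step query generation procedure can be sketched as a small query builder; the pattern-word list is an illustrative stand-in for the classification regular expressions, and the `~3` suffix only mimics Lucene-style fuzzy term notation (the exact syntax differs between Lucene versions):

```python
import re

# Words dropped because a classification pattern matched them; a simplified
# stand-in for the 176 regular expressions used at the classification stage.
PATTERN_WORDS = {"which", "what", "who", "when", "?"}

def build_query(question, focus):
    """Follow the four-step procedure: drop pattern words, keep the focus,
    join the remaining words with OR, and mark each term as fuzzy with an
    edit distance of 3 characters."""
    words = re.findall(r"\w+", question.lower())
    terms = [w for w in words if w not in PATTERN_WORDS]
    if focus not in terms:
        terms.append(focus)  # step 2: the question focus is always kept
    return " OR ".join(f"{t}~3" for t in terms)

q = build_query("Which russian submarine sank in 2000?", "submarine")
print(q)  # russian~3 OR submarine~3 OR sank~3 OR in~3 OR 2000~3
```

Fuzzy terms compensate for rich Polish inflection: a query term still matches a text word whose suffix differs within the allowed edit distance.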
],
[
"Having a set of proposed documents and a question type, the next step is to scan them and find all mentions of entities with appropriate types. RAFAEL includes two approaches to the problem: classical Named Entity Recognition (NER) and novel Deep Entity Recognition.",
"Three NERs for Polish are employed: NERF, Liner2 and Quant. NERF BIBREF29 is a tool designed within the project of the National Corpus of Polish and is based on linear-chain conditional random fields (CRF). It recognizes 13 types of NEs, possibly nested (e.g. Warsaw in University of Warsaw). Liner2 BIBREF30 also employs CRFs, but differentiates NEs of 56 types (which could be reduced to 5 for higher precision). Annotation using both of these tools happens offline within the KB preprocessing, so at the currently described stage it suffices to browse the annotations and find matching entities. As the above tools lack recognition of quantitative expressions, a new one, called Quant, has been developed especially for RAFAEL. It is able to handle both numbers and quantities (using WordNet) in a variety of notations.",
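A minimal sketch of spotting numeric expressions in several notations, in the spirit of Quant; the patterns below are illustrative assumptions, and the WordNet lookup the real tool uses for unit words is omitted:

```python
import re

# Illustrative patterns for numbers in a few common notations; Quant's
# actual rules are richer and also consult WordNet for quantity units.
NUMBER = re.compile(
    r"""\b(
        \d{1,3}(?:[ ,]\d{3})+      # grouped thousands: 1 000 000 or 1,000,000
      | \d+(?:[.,]\d+)?            # plain or decimal: 42, 3.14, 3,14
    )\b""",
    re.VERBOSE,
)

def find_quantities(text):
    """Return all numeric expressions found in the text."""
    return [m.group(0) for m in NUMBER.finditer(text)]

print(find_quantities("The ship sank in 2000 with 118 crew, 1,500 km away."))
```

Ordering the alternatives matters: the grouped-thousands form is tried first so that "1,500" is captured whole rather than split at the comma.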
"Appendix A contains details of implementation of named entity recognition in RAFAEL, including a description of Quant and a mapping between question types and named entity types available in NERF and Liner2. An alternative being in focus of this work, i.e. DeepER approach, is thorougly discussed in chapter SECREF3 .",
"RAFAEL may use any of the two approaches to entity recognition: NER (via NERF, Liner2 and Quant) or novel DeepER; this choice affects its overall performance. Experiments showing precision and recall of the whole system with respect to applied entity recognition technique are demonstrated in section SECREF88 .",
"An entity recognition step is performed within the question answering process and aims at selecting all entity mentions in a given annotated document. Before it begins, the entity library is read into a PATRICIA trie, a very efficient prefix tree. In this structure, every entity name becomes a key for storing a corresponding list of entities.",
"When a document is ready for analysis, it is searched for strings that match any of the keys in the trie. The candidate chunks (sequences of segments) come from three sources:",
"lemmata of words and syntactic groups,",
"sequences of words in surface forms (as they appear in text),",
"sequences of words in base forms (lemmata).",
"The last two techniques are necessary, because a nominal group lemmatisation often fails, especially in case of proper names. Their rich inflection in Polish BIBREF3 means that a nominal suffix of an entity may be hard to predict. Therefore, a chunk is considered to match an entity name if:",
"they share a common prefix,",
"an unmatched suffix in neither of them is longer than 3 characters,",
"the common prefix is longer than the unmatched chunk suffix.",
"Given a list of entity mentions, RAFAEL checks their compatibility with a question model. Two of its constituents are taken into account: a general question type and a synset. An entity mention agrees with NAMED_ENTITY type if its first segment starts with a capital letter and always agrees with UNNAMED_ENTITY. To pass a semantic agreement test, the synset of the question model needs to be a (direct or indirect) hypernym of one of the synsets assigned to the entity. For example, list of synsets assigned to entity Jan III Sobieski contains <król.1> (king), so it matches a question focus <władca.1, panujący.1, hierarcha.2, pan.1> (ruler) through a hypernymy path <władca.1, panujący.1, hierarcha.2, pan.1> INLINEFORM0 <monarcha.1, koronowana głowa.1> (monarch) INLINEFORM1 <król.1>. All the mentions of entities satisfying these conditions are returned for further processing."
],
[
"When a list of entity mentions in a given document is available, we need to decide which of them most likely answers the question. The obvious way to do that is to compare surroundings of every mention with the content of the question. The procedure consists of two steps: context generation and similarity measurement.",
"The aim of a context generation step is to create a set of segments surrounding an entity, to which they are assigned. Without capabilities of full text understanding, two approximate approaches seem legitimate:",
"Sentence-based – for a given entity mention, a sentence in which it appears, serves as a context,",
"Segment-based – for a given entity mention, every segment sequence of length M, containing the entity, is a context.",
"Both of them have some advantages: relying on a single sentence ensures relation between an entity and a context, whereas the latter provides possibility of modifying context length. Obviously, the value of M should be proportional to question (precisely, its content) length.",
"The method of treating sentences as a context has gained most popularity (see work of BIBREF39 ), but a window of fixed size also appears in the literature; for example BIBREF38 used one with M=140 bytes.",
"The context generation is also related to another issue, i.e. anaphoric expressions. Some segments (e.g. this, him, they) may refer to entities that occurred earlier in a text and therefore harm a similarity estimation. It could be tackled by applying anaphora resolution, but a solution for Polish BIBREF40 remains in an early stage. Observations show that the majority of anaphora refer to an entity in a document title, so the problem is partially bypassed by adding a title to a context.",
"An influence of the context generation techniques on final results is shown in section SECREF88 .",
"To measure a similarity between a question content (explained in section SECREF18 ) and an entity context (generated by the procedures in previous section), a Jaccard similarity index BIBREF41 is computed. However, not all word co-occurrences matter equally (e.g. compare this and Honolulu), so word weights are used: INLINEFORM0 ",
"The sets INLINEFORM0 and INLINEFORM1 contain segments in base forms, whereas INLINEFORM2 denotes a weight of an INLINEFORM3 -th base form, equal to its scaled IDF computed on a document set INLINEFORM4 : INLINEFORM5 ",
"The Jaccard index is a popular solution for sentence similarity measurement in QA (for example see a system by BIBREF42 ). In case of selecting relevant documents, cosine measure is also applied. BIBREF18 compared it to Minimal Span Weighting (MSW) and observed that the latter performs better, as it takes into account a distance between matched words. A study of different techniques for sentence similarity assessment could be found in BIBREF39 .",
"At this stage, a large set of pairs of entity mention and its contexts with scores assigned, is available. Which of them answers the question? Choosing the one with the highest score seems an obvious solution, but we could also aggregate scores of different mentions corresponding to the same answer (entity), e.g. compute their sum or mean. However, such experiments did not yield improvement, so RAFAEL returns only a single answer with the highest score.",
"An answer consists of the following elements: an answer string, a supporting sentence, a supporting document and a confidence value (the score). A sentence and a document, in which the best mention appeared, are assumed to support the answer. Thanks to properties of Jaccard similarity, the mention score ranges between 0 for completely unrelated sentences to 1 for practically (ignoring inflection and a word order) the same. Therefore, it may serve as an answer confidence.",
"When no entity mentions satisfying constraints of a question are found, no answer is returned. This type of result could also be used when the best confidence score is below a predefined value; performance of such technique are shown in section SECREF88 . The refusal to answer in case of insufficient confidence plays an important role in Jeopardy!, hence in IBM Watson BIBREF2 , but it was also used to improve precision in other QA systems BIBREF43 ."
],
[
"Deep Entity Recognition procedure is an alternative to applying Named Entity Recognition in QA to find entities matching question constraints. It scans a text and finds words and multi-word expressions, corresponding to entities. However, it does not assign them to one of several NE categories; instead, WordNet synsets are used. Therefore, named entities are differentiated more precisely (e.g. monarchs and athletes) and entities beyond the classical NE categories (e.g. species, events, devices) could also be recognised in a text.",
"It does not seem possible to perform this task relying solely on features extracted from words and surrounding text (as in NER), so it is essential to build an entity library. Such libraries already exist (Freebase, BabelNet, DBpedia or YAGO) and could provide an alternative for DeepER, but they concentrate on English. The task of adaptation of such a base to another language is far from trivial, especially for Slavonic languages with complex NE inflection BIBREF3 . An ontology taking into account Polish inflection (Prolexbase) has been created by BIBREF44 , but it contains only 40,000 names, grouped into 34 types."
],
[
"An entity library for DeepER contains knowledge about entities that is necessary for deep entity recognition. Each of them consists of the following elements (entity #9751, describing the Polish president, Bronisław Komorowski):",
"Main name: Bronisław Komorowski,",
"Other names (aliases): Bronisław Maria Komorowski, Komorowski,",
"Description URL: http://pl.wikipedia.org/wiki/?curid=121267,",
"plWordNet synsets:",
"<podsekretarz1, podsekretarz stanu1, wiceminister1> (vice-minister, undersecretary),",
"<wicemarszałek1> (vice-speaker of the Sejm, the Polish parliament),",
"<polityk1> (politician),",
"<wysłannik1, poseł1, posłaniec2, wysłaniec1, posłannik1> (member of a parliament),",
"<marszałek1> (speaker of the Sejm),",
"<historyk1> (historian),",
"<minister1> (minister),",
"<prezydent1, prezydent miasta1> (president of a city, mayor).",
"A process of entity library extraction is performed offline, before question answering. The library built for deep entity recognition in RAFAEL, based on the Polish Wikipedia (857,952 articles, 51,866 disambiguation pages and 304,823 redirections), contains 809,786 entities with 1,169,452 names (972,592 unique). The algorithm does not depend on any particular feature of Wikipedia, so any corpus containing entity definitions could be used.",
"Figure FIGREF54 shows an exemplary process of converting the first paragraph of a Polish Wikipedia entry, describing former Polish president Lech Wałęsa, into a list of WordNet synsets. First, we omit all unessential parts of the paragraph (1). This includes text in brackets or quotes, but also introductory expressions like jeden z (one of) or typ (type of). Then, an entity name is detached from the text by matching one of definition patterns (2). In the example we can see the most common one, a dash (–). Next, all occurrences of separators (full stops, commas and semicolons) are used to divide the text into separate chunks (3). The following step employs shallow parsing annotation – only nominal groups that appear at the beginning of the chunks are passed on (4). The first chunk that does not fulfil this requirement and all its successors get excluded from further analysis (4.1). Finally, we split the coordination groups and check, whether their lemmas correspond to any lexemes in WordNet (5). If not, the process repeats with the group replaced by its semantic head. In case of polysemous words, only the first word sense (usually the most common) is taken into account.",
"The whole process is more complicated than the simple example shows. Generally, it consists of the following steps:",
"Prepare a corpus – data format and annotation process is the same as for a knowledge base, used in question answering, see section SECREF12 . It differs in scope of page categories, including not only articles, but also disambiguation and redirection pages.",
"For each of article pages, extract the first paragraph and apply readDefinition function. If a resulting entity has a non-empty synset list, add it to the library. If some of the redirection pages point to the entity name, add their names as entity aliases.",
"For each of disambiguation pages, extract all items and apply readDefinition function. If an item refers to an existing entity, extend it with extracted synsets and disambiguation page name. Create a new entity otherwise. Add redirection names as previously.",
"Save the obtained base for future use.",
"Function readDefinition( INLINEFORM0 ) – interprets a definition to assign synsets to an entity. INLINEFORM1 - annotated first paragraph of an encyclopaedic entry INLINEFORM2 - synsets describing an entity INLINEFORM3 := {} INLINEFORM4 := removeInBrackets( INLINEFORM5 ) INLINEFORM6 := removeInQuotes( INLINEFORM7 ) INLINEFORM8 in INLINEFORM9 INLINEFORM10 matches INLINEFORM11 INLINEFORM12 := match( INLINEFORM13 , INLINEFORM14 ).group(2) break INLINEFORM15 := removeDefinitionPrefixes( INLINEFORM16 ) INLINEFORM17 := split( INLINEFORM18 , INLINEFORM19 ) INLINEFORM20 in INLINEFORM21 INLINEFORM22 := firstGroupOrWord( INLINEFORM23 ) isNominal( INLINEFORM24 ) INLINEFORM25 := INLINEFORM26 INLINEFORM27 extractSynsets( INLINEFORM28 ) break INLINEFORM29 ",
"The readDefinition function (shown as algorithm SECREF40 ) analyses a given paragraph of text and extracts a set of synsets, describing an entity, to which it corresponds, as exemplified by figure FIGREF54 . Simplifying, it is done by removing all unnecessary text (in brackets or quotes), splitting it on predefined separators (commas, full stops, semicolons) and applying extractSynsets function with an appropriate stop criterion. The readDefinition makes use of the following elements:",
"removes everything that is between brackets ([], () or {}) from the text (step (1) in figure FIGREF54 ).",
"removes everything between single or double quotes from the text (step (1) in the example).",
"contains patterns of strings separating a defined concept from a definition, e.g. hyphens or dashes (used in step (2) of the example) or jest to (is a).",
"removes expressions commonly prefixing a nominal group, such as jeden z (one of), typ (a type of) or klasa (a class of), not present in the example.",
"a set of three characters that separate parts of a definition: \".\", \",\" and \";\".",
"returns the longest syntactic element (syntactic group or word) starting at the beginning of a chunk (step (4) in the example).",
"decides, whether a chunk is a noun in nominative, a nominal group or a coordination of nominal groups.",
"Function extractSynsets( INLINEFORM0 ) – recursively extracts synsets from a nominal chunk. INLINEFORM1 - a nominal chunk (a syntactic group or a single noun) INLINEFORM2 - WordNet synsets corresponding to INLINEFORM3 INLINEFORM4 := lemmatise( INLINEFORM5 ) inWordNet( INLINEFORM6 ) getLexemes( INLINEFORM7 ).synset(0) isCoordination( INLINEFORM8 ) INLINEFORM9 := {} INLINEFORM10 in INLINEFORM11 INLINEFORM12 := INLINEFORM13 INLINEFORM14 extractSynsets( INLINEFORM15 ) INLINEFORM16 isGroup( INLINEFORM17 ) extractSynsets( INLINEFORM18 .semanticHead) {}",
"The extractSynsets function (shown as algorithm SECREF40 ) accepts a nominal chunk and extracts WordNet synsets, corresponding to it. It operates recursively to dispose any unnecessary chunk elements and find the longest subgroup, having a counterpart in WordNet. It corresponds to step (5) in figure FIGREF54 and uses the following elements:",
"returns a lemma of a nominal group.",
"checks whether a given text corresponds to a lexeme in WordNet.",
"return a list of WordNet lexemes corresponding to a given text.",
"return a synset including a lexeme in a given word sense number.",
"return TRUE iff a given chunk is a coordination group.",
"return TRUE iff a given chunk is a group.",
"is an element of a syntactic group, denoted as a semantic head.",
"A few of design decisions reflected in these procedures require further comment. First of all, they differ a lot from the studies that involve a definition represented with a bag of words BIBREF48 , BIBREF51 , BIBREF53 . Here, a certain definition structure is assumed, i.e. a series of nominal groups divided by separators. What is more, as the full stop belongs to them, the series may continue beyond a single sentence, which has improved recall in preliminary experiments. Availability of a shallow parsing layer and group lemmatisation allows to query WordNet by syntactic groups instead of single nouns, as in work of BIBREF46 . As word order is relatively free in Polish, a nominal group cannot be assumed to end with a noun, like BIBREF47 did. Instead, a semantic head of a group is used.",
"Finally, the problem of lack of word sense disambiguation remains – the line getLexemes( INLINEFORM0 ).synset(0) means that always a synset connected to the first meaning of a lexeme is selected. We assume that it corresponds to the most common meaning, but that is not always the case – in our example at figure FIGREF54 <prezydent.1, prezydent miasta.1> (president of a city, i.e. mayor) precedes <prezydent.2> (president of a country, the obvious meaning). However, it does not have to harm QA performance as far as the question analysis module (section SECREF18 ) functions analogously, e.g. in case of a question beginning with który prezydent... (which president...). Therefore, the decision has been motivated by relatively good performance of this solution in previously performed experiments on question analysis BIBREF36 . It also works in other applications, e.g. gazetteers generation BIBREF46 .",
"To assess quality of the entity library, its content has been compared with synsets manually extracted from randomly selected 100 Wikipedia articles. 95 of them contain a description of an entity in the first paragraph. Among those, DeepER entity library includes 88 (per-entity recall 92.63 per cent). 135 synsets have been manually assigned to those entities, while the corresponding set in library contains 133 items. 106 of them are equal (per-synset precision 79,70 per cent), while 13 differ only by word sense. 16 of manually extracted synsets hove no counterpart in the entity library (per-synset recall 88.15 per cent), which instead includes 14 false synsets."
],
[
"Evaluation of RAFAEL is typical for factoid QA systems: given a knowledge base and and questions, its responses are compared to the expected ones, prepared in advance. Section SECREF80 describes data used in this procedure, whereas section SECREF87 explains how an automatic evaluation is possible without human labour."
],
[
"The Polish Wikipedia serves as a knowledge base. It has been downloaded from a project site as a single database dump at 03.03.2013, from which plain text files have been extracted using Wikipedia Extractor 2.2 script. It means that only plain text is taken into account – without lists, infoboxes, tables, etc. This procedure leads to a corpus with 895,486 documents, containing 168,982,550 segments, which undergo the annotation process, described in section SECREF12 .",
"The questions that are to be answered with the knowledge base come from two separate sets:",
"Development set bases on 1500 (1130 after filtering) questions from a Polish quiz TV show, called Jeden z dziesięciu BIBREF55 . It was involved in previous experiments BIBREF4 , BIBREF36 .",
"Evaluation set bases on an open dataset for Polish QA systems, published by BIBREF56 . It has been gathered from Did you know... column, appearing in the main page of the Polish Wikipedia. It contains 4721 questions, from which 1000 have been analysed, which resulted in 576 satisfying the task constrains, given in chapter SECREF2 .",
"Table TABREF85 shows a distribution of different question types and named entity types in the sets.",
"To each of the questions from both sets some information has been assigned manually. It includes an identification number, an expected answer string, a general question type, a named entity type (if applicable) and an expected source document. Table TABREF86 contains several exemplary questions from the development set.",
"The additional information (question types and expected documents) makes it possible to evaluate only selected modules of the whole QA system. For example, we could test question classification by comparing results against given question types or entity selection by analysing only the relevant document."
],
[
"Thanks to availability of the DeepER entity library, it is possible to automatically perform answer evaluation for all the question types that are recognised by this technique (UNNAMED_ENTITY and NAMED_ENTITY excluding dates, numbers and quantities).",
"Both an expected and obtained answer are represented as short strings, e.g. Bronisław Komorowski. However, it does not suffice to check their exact equality. That is caused by existence of different names for one entity (Bronisław Maria Komorowski or Komorowski), but also rich nominal inflection (Komorowskiego, Komorowskiemu, ...).",
"In fact, we want to compare entities, not names. Hence, deep entity recognition is a natural solution here. To check correctness of an answer, we use it as an input for the recognition process, described in section SECREF73 . Then, it is enough to check whether the expected answer appears in any of lists of names, assigned to the recognized entities. For example, let us consider a question: Kto jest obecnie prezydentem Polski? (Who is the current president of Poland?) with expected answer Bronisław Komorowski and a system answer Komorowski. The DeepER process finds many entities in the string (all the persons bearing this popular surname). One of them is the question goal, hence, has Bronisław Komorowski in its list of names.",
"As the process of entity recognition is imperfect, so is the automatic evaluation. However, it still lets us to notice general trends in answering performance with respect to several factors. Of course, the final evaluation needs to be checked manually."
],
[
"As mentioned in previous section, the results consist of two groups: experiments, showing an influence of some aspects of algorithm on performance, and a final assessment. Both use the Polish Wikipedia as a knowledge base, whereas the questions asked belong to development and evaluation sets, respectively. In this section, recall measures percentage of questions, to which RAFAEL gave any answer, whereas precision denotes percentage of question answered correctly.",
"When analysing results of different entity recognition techniques, we need to remember that they strongly rely on output of the question analysis, which is not perfect. In particular, tests show that 15.65 per cent of questions is assigned to wrong type and 17.81 per cent search results do not include the expected document BIBREF36 . The entity recognition (ER) stage, a focus of this work, is very unlikely to deliver valid answers in these cases. However, as the expected question type and source document are available in question metadata, it is possible to correct results of question analysis by artificially replacing a wrong type and/or adding the expected document to the retrieved set. In that way the ER modules could be evaluated, as if question analysis worked perfectly. Note that this approach slightly favours NER-based solutions as the question metadata contains general types and named entity types but lack focus synsets, used by DeepER."
],
[
"The goal of the first experiment is to test how number a of documents retrieved from the search engine and analysed by the entity recognition techniques, influences the performance. Question classification errors have been bypassed as described in the previous paragraph. Additionally, two versions have been evaluated: with and without corrections of a retrieved set of documents. Figure FIGREF89 demonstrates results for different entity recognition techniques.",
"As we can see, if a retrieved set contains the desired article, adding new documents slightly increases recall, while precision drops observably. That is because additional irrelevant documents usually introduce noise. However, in some cases they are useful, as increasing recall indicates. On the other hand, if we have no guarantee of presence of the expected document in a list, it seems more desirable to extend it, especially for small sizes. For sets bigger than 50 elements, the noise factor again dominates our results. Judging by F1 measure, the optimal value is 20 documents.",
"When it comes to the comparison, it should be noted that DeepER performs noticeably better than traditional NER. The gain in precision is small, but recall is almost twice as big. It could be easily explained by the fact that the NER solutions are unable to handle UNNAMED_ENTITY type, which accounts for 36 per cent of the entity questions.",
"It is also worthwhile to check how the system performs while using different values of minimal confidence rate (Jaccard similarity), as described in section UID38 . It could become useful when we demand higher precision and approve lower recall ratio. The plot in figure FIGREF90 shows answering performance using DeepER with corrected question analysis with respect to the minimal confidence rate. Generally, the system behaves as expected, but the exact values disappoint. The precision remain at a level of 25-40 per cent up to confidence 0.75, where in turn recall drops to 0.35 per cent only. Values of F1 measure suggest that 0.2 is the highest sensible confidence rate.",
"One more parameter worth testing, explained in section UID34 , is the context generation strategy. To find the entity with a context most similar to a question content, we could analyse a single sentence, where it appears, or a sequence of words of a predefined length. For both of these solutions, we could also add a document title, as it is likely to be referred to by anaphoric expressions. Figure FIGREF91 shows the value of precision (recall does not depend on context) for these four solutions.",
"We can see that inclusion of a title in a context helps to achieve a better precision. The impact of anaphoric reference to title emerges clearly in case of flexible context – the difference grows with context size. Quite surprisingly, for the optimal context length (1.5 * question size), it is on the contrary. However, because of the small difference between the techniques including title, for the sake of simplicity, the single sentence is used in the final evaluation."
],
[
"To impose a realistic challenge to the system, the evaluation set, used at this stage, substantially differs from the one used during the development (see section SECREF80 ). A configuration for the final evaluation has been prepared based on results of the experiments. All of the tested versions share the following features:",
"no question analysis corrections,",
"question classification and query generation solutions which proved best in the previous experiments (see section SECREF18 ),",
"a retrieved set of documents including 20 articles,",
"no minimal confidence,",
"singe sentence context with title.",
"Tested solutions differ with respect to entity recognition only; RAFEL variants based on the following options are considered:",
"quantities recognizer (Quant),",
"traditional NER solutions: Nerf and Liner2,",
"deep entity recognition (DeepER),",
"hybrid approach, where entity mentions were gathered from all the above sources.",
"Table TABREF103 shows results of the final evaluation, expressed by recall, precision, F1 measure and Mean Reciprocal Rank (MRR). Standard deviations of these values have been obtained by bootstrap resampling of the test set. Additionally, precision obtained by automatic evaluation has been added, where applicable. As we can see, only a small percentage of questions is handled by the quantitative entities recognition. NER-based solutions deal with slightly more (Nerf) or less (Liner2) than a half of the questions. When using DeepER, the recall ratio rises to 73 per cent while the precision does not differ significantly. That is because UNNAMED_ENTITY questions (unreachable for traditional NER) account for a substantial part of the test set. The maximum recall is obtained by the hybrid solution (90 per cent) but it comes at a cost of lower precision (33 per cent). On the other hand, when we take the whole ranking lists into account, traditional NERs seem to perform better (in terms of MRR).",
"As expected, the automatic evaluation underestimates precision, but the difference remains below 5 per cent. Judging by F1 measure, the hybrid solution seems to beat the others."
],
[
"The main strength of DeepER compared to NER, according to results shown in figure TABREF103 , is much higher recall. Table TABREF106 shows examples of questions, to which only DeepER provides a correct answer. As we can see (notice question foci in the table), they could not be assigned to any of the traditional NE categories.",
"The other striking fact in the results is low precision. A part of the wrong answers was inspected and most of the errors seem to result from the following phenomena:",
"The entity recognizers also introduce errors typical for them:",
"The last remark applies also to other techniques. For example, consider a word kot, which means a cat. However, it is also a name of a journal, a lake, a village, a badge (KOT), a surname of 10 persons in the Polish Wikipedia and much more. A human would usually assume the most common meaning (a cat), but the system treats them as equally probable. It introduces noise in the process, as such an entity matches many types of questions.",
"Another thing that demands explanation is a difference in precision of answers found using Liner2 and DeepER: in evaluation set the latter does not maintain its advantage from development set. It could be explained by different compositions of the question sets (table TABREF85 ) – the development one contains much more questions beginning with ambiguous pronouns, followed by a question focus, e.g. Który poeta... (which poet), thus providing a precise synset (a poet) for deep entity recognition. Members of the evaluation set much more frequently begin with pronouns like Kto ...(who), where a synset corresponds to a general NE type (a person).",
"As RAFAEL is the first Polish QA system, able to answer by entities instead of documents, we can not compare it directly to any other solution. However, the evaluation set has been created based on questions published by BIBREF56 and used for evaluation of a document retrieval system BIBREF18 . Their baseline configuration achieved a@1 (percentage of questions answered by the first document, corresponds to precision in table TABREF103 ) equal 26.09 per cent. By taking into account proximity of keyword matches (MCSW method), they improved the result to 38.63 per cent. We can see that RAFAEL, despite solving much more challenging problem, in all configurations obtains better precision than baseline; using Liner2 it beats even the best method tested on this set (MCSW).",
"The results suggest two possible directions of future work to improve performance of RAFAEL. Firstly, involving semantics in sentence matching could solve some of the problems mentioned above. There are a lot of techniques in that area, also in QA systems (see a variety of them used by BIBREF39 ), but their implementation in a morphologically rich language would require a thorough study. For example, there exist techniques computing a semantic similarity based on a WordNet graph BIBREF57 , which is available for Polish and proved very useful in this study. Secondly, the relatively good performance of hybrid ER indicates that it may be good to apply different entity recognizer to different questions. For example, we could evaluate them for each question type separately and select the one that performs best for a given one. However, it would require much more training data to have a substantial number of questions of each type, including the scarce ones (observe sparsity of table TABREF85 ).",
"When it comes to DeepER, word ambiguity seem to be the main issue for future efforts. Of course, a full-lexicon precise word-sense disambiguation tool would solve the problem, but we can't expect it in near future. Instead, we could select a synset somewhere in a path between a focus synset and a named entity type. In the example from figure FIGREF54 rather than choosing between <prezydent.1, prezydent miasta.1> (president of a city) and <prezydent.2> (president of a country) we could use <urzędnik.1, biuralista.1> (official), which covers both meanings."
],
[
"This paper introduces RAFAEL, a complete open-domain question answering system for Polish. It is capable of analysing a given question, scanning a large corpus and extracting an answer, represented as a short string of text.",
"In its design, the focus has been on entity recognition techniques, used to extract all the entities compatible with a question from a given text. Apart from the traditional named entity recognition, differentiating between several broad categories of NEs, a novel technique, called Deep Entity Recognition (DeepER), has been proposed and implemented. It is able to find entities belonging to a given WordNet synset, using an entity library, gathered by interpreting definitions from encyclopaedia.",
"Automatic evaluation, provided by the DeepER approach, has made it possible to perform several experiments, showing answering accuracy with respect to different parameters. Their conclusions have been used to prepare the final evaluation, the results of which have been checked manually. They suggest that the DeepER-based solution yields precision similar to NER, but is able to answer many more questions, including those beyond the traditional categories of named entities."
],
[
"As mentioned in section SECREF32 , apart from DeepER, RAFAEL also employs traditional NER-based solutions for entity recognition: NERF, Liner2 and Quant. Each of them uses its own typology of named entities, which covers only a part of the types enumerated in section SECREF18 . Table TABREF118 shows the correspondence between these types. As we can see, there are a few problems:",
"Problems 3 and 4 are solved by additional postprocessing code, extracting CENTURY from date and NAME and SURNAME from person_nam entities. In the case of multi-segment person entities it assumes that the first and last word correspond to the first and last name, respectively.",
"While NERF and Liner2 are standalone NER tools and details of their design are available in previously mentioned publications, Quant has been created specifically for RAFAEL. To find numbers, it annotates all chains of segments according to a predefined pattern, which accepts the following types of segments:",
"The pattern is matched in greedy mode, i.e. it adds as many new segments as possible. It could recognise expressions like 10 tysięcy (10 thousand), kilka milionów (several million), 10 000 or 1.698,88 (1,698.88).",
"A quantity is a sequence of segments, recognised as a number, followed by a unit of measurement. To check whether a word denotes a unit of measurement, plWordNet is searched for lexemes equal to its base. Then it suffices to check whether it belongs to a synset having <jednostka miary 1> (unit of measurement) as one of its (direct or indirect) hypernyms, e.g. piętnaście kilogramów (fifteen kilograms) or 5 000 watów (5 000 watts)."
],
[
"The study was supported by a research fellowship within the \"Information technologies: research and their interdisciplinary applications\" agreement number POKL.04.01.01-00-051/10-00. Critical reading of the manuscript by Agnieszka Mykowiecka and Aleksandra Brzezińska is gratefully acknowledged."
]
]
}
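The greedy number-chain matching described for Quant in the record above can be sketched in a few lines. This is a minimal sketch under stated assumptions: the segment classes, the word list (`NUMBER_WORDS`), the digit pattern, and the function names are illustrative, not the tool's actual rules.

```python
import re

# Illustrative (not Quant's actual) list of Polish number words and a digit
# pattern accepting forms like "10", "10 000" (as two tokens), and "1.698,88".
NUMBER_WORDS = {"kilka", "tysiąc", "tysięcy", "milion", "milionów"}
DIGIT_RE = re.compile(r"^\d{1,3}([ .]\d{3})*(,\d+)?$|^\d+$")

def is_number_segment(token):
    """A segment is number-like if it is a number word or matches the digit pattern."""
    return token.lower() in NUMBER_WORDS or bool(DIGIT_RE.match(token))

def find_number_chains(tokens):
    """Greedily extend a chain of number-like segments as far as possible,
    mirroring the 'greedy mode' described in the text."""
    chains, i = [], 0
    while i < len(tokens):
        if is_number_segment(tokens[i]):
            j = i
            while j + 1 < len(tokens) and is_number_segment(tokens[j + 1]):
                j += 1  # add as many new segments as possible
            chains.append(tokens[i:j + 1])
            i = j + 1
        else:
            i += 1
    return chains
```

For example, `find_number_chains("ma 10 tysięcy mieszkańców".split())` yields the single chain `["10", "tysięcy"]`; a real unit-of-measurement check against plWordNet hypernyms would then follow.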
|
{
"question": [
"Do they compare DeepER against other approaches?",
"How is the data in RAFAEL labelled?",
"How do they handle polysemous words in their entity library?"
],
"question_id": [
"63496705fff20c55d4b3d8cdf4786f93e742dd3d",
"7b44bee49b7cb39cb7d5eec79af5773178c27d4d",
"6d54bad91b6ccd1108d1ddbff1d217c6806e0842"
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid)."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid)."
]
}
],
"annotation_id": [
"3dd14ec7c6c2a4fa560f7cff98479063dda0e1c9"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Using a set of annotation tools such as Morfeusz, PANTERA, Spejd, NERF and Liner",
"evidence": [
"Secondly, texts go through a cascade of annotation tools, enriching it with the following information:",
"Morphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,",
"Tagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,",
"Syntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,",
"Named entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 ."
],
"highlighted_evidence": [
"Secondly, texts go through a cascade of annotation tools, enriching it with the following information:\n\nMorphosyntactic interpretations (sets of tags), using Morfeusz 0.82 BIBREF25 ,\n\nTagging (selection of the most probable interpretation), using a transformation-based learning tagger, PANTERA 0.9.1 BIBREF26 ,\n\nSyntactic groups (possibly nested) with syntactic and semantic heads, using a rule-based shallow parser Spejd 1.3.7 BIBREF27 with a Polish grammar, including improved version of modifications by BIBREF28 , enabling lemmatisation of nominal syntactic groups,\n\nNamed entities, using two available tools: NERF 0.1 BIBREF29 and Liner2 2.3 BIBREF30 .\n\n"
]
}
],
"annotation_id": [
"1075c87b188f9958978397a9f9589fc0136d8fca"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"only the first word sense (usually the most common) is taken into account"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Figure FIGREF54 shows an exemplary process of converting the first paragraph of a Polish Wikipedia entry, describing former Polish president Lech Wałęsa, into a list of WordNet synsets. First, we omit all unessential parts of the paragraph (1). This includes text in brackets or quotes, but also introductory expressions like jeden z (one of) or typ (type of). Then, an entity name is detached from the text by matching one of definition patterns (2). In the example we can see the most common one, a dash (–). Next, all occurrences of separators (full stops, commas and semicolons) are used to divide the text into separate chunks (3). The following step employs shallow parsing annotation – only nominal groups that appear at the beginning of the chunks are passed on (4). The first chunk that does not fulfil this requirement and all its successors get excluded from further analysis (4.1). Finally, we split the coordination groups and check, whether their lemmas correspond to any lexemes in WordNet (5). If not, the process repeats with the group replaced by its semantic head. In case of polysemous words, only the first word sense (usually the most common) is taken into account."
],
"highlighted_evidence": [
"In case of polysemous words, only the first word sense (usually the most common) is taken into account."
]
}
],
"annotation_id": [
"3ed4ab7fb1ef561174c750eaf67ea3cc23b8d73b"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
}
|
{
"caption": [
"Fig. 1. Overall architecture of the QA system – RAFAEL. See descriptions of elements in text.",
"Fig. 2. Outline of a question focus analysis procedure used to determine an entity type in case of ambiguous interrogative pronouns.",
"Fig. 3. Example of the entity extraction process in DeepER, transforming a Wikipedia entry of Lech Wałęsa into a list of synsets.",
"Table 1. A distribution of different general types and named entity types in development (1130 questions) and final evaluation (576 questions) sets.",
"Table 2. Exemplary questions with their types (general and named entity), expected source articles and answers.",
"Fig. 4. Question answering performance with respect to size of a retrieved set of documents, undergoing a full analysis. Two versions are considered – with and without guaranteed presence of an article, containing the desired information, in a set. The results for different entity recognition techniques– traditional NER (Nerf, Liner2) and DeepER.",
"Fig. 5. RAFAEL performance with respect to minimal confidence rate. Results computed using DeepER with corrected question type and corrected list of 50 documents.",
"Fig. 6. Question answering performance for different context generation strategies: single sentence and sequence of segments of certain length. Both types considered with and without an article title added.",
"Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional NER (Nerf, Liner2 ), deep entity recognition (DeepER) and their combination (Hybrid).",
"Table 4. Examples of questions which have been handled and answered correctly only with the DeepER approach. Their foci lie beyond areas covered by the NE categories.",
"Table 5. Correspondence between named entity types from question analysis and supported by different NER solutions."
],
"file": [
"4-Figure1-1.png",
"6-Figure2-1.png",
"12-Figure3-1.png",
"15-Table1-1.png",
"16-Table2-1.png",
"18-Figure4-1.png",
"19-Figure5-1.png",
"19-Figure6-1.png",
"20-Table3-1.png",
"21-Table4-1.png",
"23-Table5-1.png"
]
}
|
1709.08858
|
Polysemy Detection in Distributed Representation of Word Sense
|
In this paper, we propose a statistical test to determine whether a given word is used as a polysemic word or not. The statistic of the word in this test roughly corresponds to the fluctuation in the senses of the neighboring words and the word itself. Even though the sense of a word corresponds to a single vector, we discuss how polysemy of the words affects the position of vectors. Finally, we also explain the method to detect this effect.
|
{
"section_name": [
"Introduction",
"Related Work",
"Senses and Contexts",
"Proposed Method",
"Experimental Settings and Examples of Calculation",
"Evaluation",
"Error analysis",
"Discussion",
"Conclusion"
],
"paragraphs": [
[
"Distributed representation of word sense provides us with the ability to perform several operations on a word. One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word. When a word has several senses, it is called a polysemic word. When a word has only one sense, it is called a monosemic word. We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word. We can explain this fact as follows. Even though a word may be polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.",
"To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity. The surrounding uniformity roughly corresponds to the statistical fluctuation in the vectors that correspond to the words in the neighbor. We have found that there is a difference in the surrounding uniformity between a monosemic word and a polysemic word. This paper describes how to compute the surrounding uniformity for a given word, and discusses the relationship between surrounding uniformity and polysemy."
],
[
"The distributed word representation can be computed as the weight vectors of neurons which learn language modeling BIBREF0 . We can obtain a distributed representation of a word using the Word2Vec software BIBREF1 , which enables us to perform vector addition/subtraction on a word's meaning. The theoretical background is analyzed by BIBREF2 , where the operation is shown to factorize a word-context matrix whose elements are some function of the given word and its context pairs. This analysis gives us insight into how the vector is affected by multiple senses or multiple context sets. If a word has two senses, the obtained representation for the word will be a linearly interpolated point between the two points corresponding to those senses.",
"The importance of multiple senses is well recognized in word sense detection with distributed representations. The usual approach is to compute a corresponding vector for each sense of a word BIBREF3 , BIBREF4 . In this approach, first, the contexts are clustered; then, the vector for each cluster is computed. However, the major problem faced by this approach is that all target words need to be assumed to be polysemic first, and their contexts always need to be clustered. Another approach is to use external language resources for word sense, and to classify the contexts BIBREF5 . The problem with this approach is that it requires language resources of meanings to obtain the meaning of a polysemic word. If we knew whether a given word is polysemic or monosemic through a relatively simple method, we could concentrate our attention on polysemic words."
],
[
"In this paper, we assume that the sense of a word is determined by the distribution of contexts in which the word appears in a given corpus. If a word comes to be used in new contexts, the word comes to have a new sense. If we had an infinitely sized corpus, this sense might converge to the sense in the dictionary. In reality, the size of the corpus at hand is limited, and some senses indicated in a dictionary may not appear in the corpus. The distinction between the senses in a dictionary and the senses in the corpus is important in this paper, because it is crucial for discussing polysemy. All discussions in this paper depend on the corpus at hand. We use the FIL9 corpus (http://mattmahoney.net/dc/textdata), which primarily consists of descriptions of believed facts, rather than conversations. We can expect that senses that are mainly used in conversation would not appear in this corpus.",
"In this paper, we analyze auxiliary verbs, which are polysemic words according to a dictionary. If the corpus is limited to descriptions of believed facts, we may regard auxiliary verbs as monosemic words, since their contexts are limited. In addition, we particularly analyze the relationship between the auxiliary verb “may” and the name of the month “May”. In the dictionary, these two are regarded as two different words, rather than as two different senses of one word. By ignoring upper/lower case characters, these two words have the same character sequence, and the word “may” becomes a polysemic word, which has two types of context in the given corpus."
],
[
"Our proposed method is based on the following measures. Let $\\vec{w}$ be the vector corresponding to the given word. Let $N$ be the size of the neighbor, such as 4. First, we choose the $N$ neighboring words whose angle with the given word is the smallest. This operation is already implemented in the Word2Vec software. Let $\\vec{a_i}(\\vec{w})$ be the vector corresponding to the $i$ th word in the neighbor of the given word.",
"We choose the uniformity of vectors, which can be regarded as a general case of the triangle inequality. The uniformity of a set of vectors is a ratio: the norm of the sum of the vectors divided by the sum of the norms of the vectors. If and only if all directions of the vectors are the same, the uniformity becomes 1.0. We compute this uniformity for the neighbors, including the word itself. The Surrounding Uniformity (SU) can be expressed as follows: $SU(\\vec{w}) = \\frac{|\\vec{s}(\\vec{w})|}{|\\vec{w}| + \\sum _{i}^{N}|\\vec{a_i}(\\vec{w})|}$ ",
"where $\\vec{s}(\\vec{w}) = \\vec{w} + \\sum _{i}^{N} \\vec{a_i}(\\vec{w}).$ ",
"When computing SU, we consider the set of words whose vectors are reliable. We choose these words as the most frequently appearing words in the corpus. The number of such words is denoted as $limit$ . If a word is not in this set, or the word does not have a sufficient number of neighbors in this set, we consider the value of SU to be undefined, and the word does not have this value.",
"Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps: (1) set $N$ , the size of the neighbor; (2) choose the $N$ neighboring words $a_i$ whose angle with the vector of the given word $w$ is the smallest; (3) compute the surrounding uniformity for each $a_i$ and for $w$ ; (4) compute the mean $m$ and the standard deviation $\\sigma $ of the uniformities of the $a_i$ ; (5) check whether the uniformity of $w$ is less than $m-3\\sigma $ ; if so, we may regard $w$ as a polysemic word.",
"This is a basic statistical test BIBREF6 to detect outliers.",
"Note that we cannot compute the variance if some $a_i$ does not have the value of SU. Further, it may be also possible that all $a_i$ may have the same SU, sharing identical neighbors. In this case, the variance becomes an extreme value, that is, 0. In these cases, we consider that we cannot perform the statistical test."
],
[
"We used FIL9, which is freely available as a test corpus for Word2Vec and is derived from Wikipedia. We compute 200-dimensional distributed vector representations with the default parameters. In this setting, all uppercase characters are converted into lower case; this is why all proper nouns are in lower case in the examples. First, we selected as stable words the 1000 words that appear most frequently in the text, and computed the surrounding uniformity of these words. Both the given word $w$ and its neighboring words $a_i$ are limited to stable words. We then determine the search scope for stable neighboring words and set $N$ , the number of neighbors used to compute the surrounding uniformity, to 4. For example, if there are 7 stable words in the search scope, we use only the top 4 words to compute the surrounding uniformity.",
"Table 1 shows the uniformity of auxiliary verbs in this setting. We were able to compute the surrounding uniformity for 160 words; for the remaining 840 words, there were fewer than the required 4 stable neighboring words in the search scope and the surrounding uniformity could not be determined.",
"For the case of the word “may”, neighbor words are “can”, “should”, “might”, and “will”. Their surrounding uniformities are, 0.9252 (“can”), 0.9232 (“should”), 0.9179 (“might”), and 0.9266 (“will”). Then $m$ is equal to 0.9232, and $\\sigma $ is equal to 0.0038. Therefore, $m-3\\sigma $ is 0.9118, which is greater than 0.8917 (“may”). Since the surrounding uniformity of the word “may” is regarded as an outlier, we think of “may” as polysemic. In this setting, the word “may” is polysemic because the program works in a case-insensitive mode, and the word “may” could be both an auxiliary verb and the name of a month.",
"The next example is the word “might”, whose surrounding uniformity is smaller than every neighbor word. For the word “might”, neighbor words are “would”, “could”, “should”, and “cannot”. Their surrounding uniformities are 0.9266 (“would”), 0.9290 (“could”), 0.9232 (“should”), and 0.9224 (“cannot”). Hence, $m$ is equal to 0.9253, and $\\sigma $ is equal to 0.0032. Therefore, $m-3\\sigma $ is 0.9157, which is less than 0.9179 (“might”). We cannot say 0.9179 is an outlier, and thus we cannot say the word “might” is polysemic.",
"Figure 1 shows the distribution of vectors.",
"The vector of “may” is placed in the interpolated position between “may” as an auxiliary verb and “may” as the name of a month. Since the word “may” is more frequently used as auxiliary verb, the vector is placed near other auxiliary verbs. However, the position of “may” could be an outlier for other auxiliary verbs.",
"In addition, we should show the results of names of months because these names will have the same contexts when the word is used as the name of a month. The word “may” has other contexts as auxiliary verbs. The word “august” has the sense of an adjective in the dictionary. The word “march” has a sense of a verb. Other names are monosemic words in the dictionary. Table 2 shows the surrounding uniformity for all the names of the months.",
"If we apply the test, only the word “may” passes the test. The example that fails the test is the word “august”, whose surrounding uniformity is also smaller than every neighbor word. For the case of the word “august”, $m$ is equal to 0.9808, and $\\sigma $ is equal to 0.0005. Therefore, $m-3\\sigma $ becomes 0.9793, which is less than 0.9802 (“august”). We cannot say the word “august” is polysemic, but the value of uniformity is very close to the lower bound. Other names have a greater uniformity than the corresponding lower bound. In summary, the proposed method can detect the polysemic “may”, but cannot detect the polysemicity of “august” and “march”.",
"Although we can claim nothing if the statistical test fails, even the negatives have practical value for this test. In the case of the word “august”, it can be used as an adjective. Although we cannot say the word “august” is polysemic from the proposed procedure, we also cannot claim that the word “august” is monosemic. We think this failure is caused by there being few, if any, contexts of “august” as an adjective. In that case, clustering the contexts would be difficult in practice. Therefore, the proposed test is meaningful even for a negative result, when the result is used to judge whether further analysis of the context is worthwhile. This discussion should also hold for the word “march”, which may be used as a verb.",
"There are other interesting words for which the proposed method detects polysemicity. These words are “james”, “mark”, and “bill”. Their neighboring words are names of persons, such as “john”, “richard”, “robert”, “william”, “david”, “charles”, “henry”, “thomas”, “michael”, and “edward”. “mark” and “bill” have the same spelling as regular nouns. The word “james” has no such counterpart and is examined in the error analysis."
],
[
"First, we set the value of $limit$ to 1000, and $N$ to 4. We then performed the statistical test on these 1000 words. Of these, 33 words passed the test, and we assume that these words belong to the set POLY. We were unable to perform the statistical test for 127 words. We say that the remaining 840 words belong to the set MONO.",
"As evaluation, we attempted to measure the agreement of human judgment for all the words of POLY and MONO. However, during the evaluation, we found that many of the errors come from a problem with Word2Vec. For example, the vector of “sir” and the vector of “william” are very close, because “sir william” should be very close to “william”. The same holds for “w” and “george”.",
"Therefore, we first selected words whose 10 neighboring words seem reasonable neighbors to human judges, and performed human judgments of polysemicity. We also focused on the words that have an SU greater than 0.75, because the statistical test will be reliable when SU is large. Table 3 shows the list of words that passed the test and have an SU higher than 0.75.",
"Table 3 shows all the words in POLY that are judged by human. Similarly Table 4 shows all the words in MONO that are judged by human.",
"We have sampled words from MONO because there are many words in MONO. In these tables, the SU of surrounding words are also presented.",
"Table 5 shows the confusion matrix of the agreement between computer and human judgments.",
"As there exists a cell whose count is less than or equal to 5, we need Yates's continuity correction. The agreement achieves statistical significance at the level $\\alpha =0.05$ . The disagreement in POLY in Table 5 for the word “james” attracted our attention."
],
[
"The disagreement in MONO could be because we chose $3\\sigma $ , which can detect polysemicity only in extremely apparent cases. Even so, the word “james” passes the proposed statistical test. Therefore, the word “james” is worth investigating.",
"After examining the contexts of “james”, we found that it can be used as the name of a river as well as of a person. Table 6 shows various names and how many times each name is used with the word “river”.",
"The word “james” is the name most frequently used with “river”. This may be why the word passes the statistical test."
],
[
"The majority of the polysemicity presented in this paper exists because Word2Vec computes the distributed representation after ignoring case. This polysemicity might not be regarded as polysemicity with more careful preprocessing.",
"The behavior of the proposed method depends on the Word2Vec options and the size of the corpus. If Word2Vec does not produce a reasonable neighbor consisting of words of similar usage, the proposed method cannot work effectively. In addition, a problem arising from the use of Word2Vec for our application is the placement of the vector of “sir” and the vector of “william” in similar positions. Therefore, we may need to utilize another method to compute the distributed representation of words. We use the FIL9 corpus for the experiment. Though this corpus is freely available to everyone, its size may not be sufficient. Although we can detect the polysemicity of “may”, we cannot detect the polysemicity of “august” and “march”. A statistical test cannot detect the right answer if we do not have sufficient data; therefore, this failure may be interpreted as insufficient usage of “march” as a verb, and “august” as an adjective, owing to the corpus's origin in Wikipedia, which is in essence a description of facts.",
"We believe we need to find a way to select the number of neighbors to improve the accuracy of the test. To make the statistical test more accurate, we need more samples from the neighbors. At the same time, since we assume that we can measure the statistical fluctuation from the neighbors, we need to exclude words of a different nature from the neighbors. It is natural that the right neighbor size may differ from word to word. The number that we chose is the minimum value for the statistical test, and there is room to adjust it for improvement.",
"We computed the neighbor and surrounding uniformity of the 1000 most frequently used words in FIL9. We observed that proper nouns tend to have a large surrounding uniformity, whereas prepositions tend to have a small surrounding uniformity. It is an interesting observation that the surrounding uniformity reflects the part of speech information, although it is difficult to determine the class of a word from the value of the surrounding uniformity alone. For the ease of confirming this observation, the obtained table can be downloaded from the reference (http://www.ss.cs.tut.ac.jp/FIL9SU/)."
],
[
"In this paper, we proposed a method to detect polysemy based on the distributed representation produced by Word2Vec. We computed the surrounding uniformity of word vectors and formulated a statistical test. We illustrated this measure with several examples, and explained the statistical test for detecting polysemy. In addition, we discussed the feasibility of this test."
]
]
}
|
{
"question": [
"How is the fluctuation in the sense of the word and its neighbors measured?"
],
"question_id": [
"238ec3c1e1093ce2f5122ee60209b969f7669fae"
],
"nlp_background": [
""
],
"topic_background": [
"familiar"
],
"paper_read": [
"no"
],
"search_query": [
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:\n1) Setting N, the size of the neighbor.\n2) Choosing N neighboring words ai in the order whose angle with the vector of the given word w is the smallest.\n3) Computing the surrounding uniformity for ai(0 < i ≤ N) and w.\n4) Computing the mean m and the sample variance σ for the uniformities of ai .\n5) Checking whether the uniformity of w is less than m − 3σ. If the value is less than m − 3σ, we may regard w as a polysemic word.",
"evidence": [
"Distributed representation of word sense provides us with the ability to perform several operations on the word. One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word. When a word has several senses, it is called a polysemic word. When a word has only one sense, it is called a monosemic word. We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word. We can explain this fact as follows. Even though a word may be a polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.",
"To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity. The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor. We have found that there is a difference in the surrounding uniformity between a monosemic word and a polysemic word. This paper describes how to compute surrounding uniformity for a given word, and discuss the relationship between surrounding uniformity and polysemy.",
"We choose the uniformity of vectors, which can be regarded as general case of triangle inequality. The uniformity of a set of vectors is a ratio, i.e., the size of the vector of the vector addition of the vectors divided by the scalar sum of the sizes of the vectors. If and only if all directions of the vectors are the same, the uniformity becomes 1.0. We compute this uniformity for the neighbors, including the word itself. Surrounding Uniformity (SU) can be expressed as follows: $SU(\\vec{w}) = \\frac{|\\vec{s}(\\vec{w})|}{|\\vec{w}| + \\sum _{i}^{N}|\\vec{a_i}(\\vec{w})|}$",
"where $\\vec{s}(\\vec{w}) = \\vec{w} + \\sum _{i}^{N} \\vec{a_i}(\\vec{w}).$"
],
"highlighted_evidence": [
"One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this set the neighbor of the word.",
"We have observed that the neighbor of a polysemic word consists of words that resemble the primary sense of the polysemic word.",
" Even though a word may be a polysemic, it usually corresponds to a single vector in distributed representation. This vector is primarily determined by the major sense, which is most frequently used. The information about a word's minor sense is subtle, and the effect of a minor sense is difficult to distinguish from statistical fluctuation.",
"To measure the effect of a minor sense, this paper proposes to use the concept of surrounding uniformity.",
"The surrounding uniformity roughly corresponds to statistical fluctuation in the vectors that correspond to the words in the neighbor",
"Surrounding Uniformity (SU) can be expressed as follows: $SU(\\vec{w}) = \\frac{|\\vec{s}(\\vec{w})|}{|\\vec{w}| + \\sum _{i}^{N}|\\vec{a_i}(\\vec{w})|}$\n\nwhere $\\vec{s}(\\vec{w}) = \\vec{w} + \\sum _{i}^{N} \\vec{a_i}(\\vec{w}).$"
]
}
],
"annotation_id": [
"107800957bb3f9cc126bc15bd4413355fdfe15dc"
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
}
]
}
|
{
"caption": [
"TABLE I AUXILIARY VERBS, THEIR NEIGHBORING WORDS, AND SURROUNDING UNIFORMITIES. THE NEIGHBORING WORDS OF AN AUXILIARY VERB CONSIST OF OTHER AUXILIARY VERBS. THE WORD “MAY” HAS A SMALL SURROUNDING UNIFORMITY, ALTHOUGH ITS NEIGHBORING WORDS CONSIST OF AUXILIARY VERBS.",
"TABLE II NAMES OF THE MONTHS, THEIR NEIGHBORING WORDS, AND SURROUNDING UNIFORMITIES. ONLY “MAY”, WHICH HAS THE SMALLEST SURROUNDING UNIFORMITY, PASS THE STATISTICAL TEST. ALTHOUGH THE WORD “MAY” MIGHT BE USED AS THE NAME OF A MONTH, THE CORRESPONDING VECTOR IS NEAR THE AUXILIARY VERBS.",
"TABLE III EVALUATED WORDS AND ITS NEIGHBOR THAT PASSES THE STATISTICAL TEST.",
"TABLE IV EVALUATED WORDS THAT DOES NOT PASS THE STATISTICAL TEST.",
"TABLE V CONFUSION MATRIX OF THE AGREEMENT BETWEEN COMPUTER AND HUMAN JUDGMENTS. IT SHOWS STATISTICAL SIGNIFICANCE BY USING X2 TEST.",
"TABLE VI FREQUENCIES OF A PERSON’S NAME AND THE NAME FOLLOWED BY THE WORD “RIVER”. THE NAME“JAMES” IS THE MOST FREQUENTLY USED NAME WITH THE WORD “RIVER”."
],
"file": [
"3-TableI-1.png",
"4-TableII-1.png",
"5-TableIII-1.png",
"5-TableIV-1.png",
"5-TableV-1.png",
"5-TableVI-1.png"
]
}
|
1706.03610
|
Neural Domain Adaptation for Biomedical Question Answering
|
Factoid question answering (QA) has recently benefited from the development of deep learning (DL) systems. Neural network models outperform traditional approaches in domains where large datasets exist, such as SQuAD (ca. 100,000 questions) for Wikipedia articles. However, these systems have not yet been applied to QA in more specific domains, such as biomedicine, because datasets are generally too small to train a DL system from scratch. For example, the BioASQ dataset for biomedical QA comprises fewer than 900 factoid (single answer) and list (multiple answers) QA instances. In this work, we adapt a neural QA system trained on a large open-domain dataset (SQuAD, source) to a biomedical dataset (BioASQ, target) by employing various transfer learning techniques. Our network architecture is based on a state-of-the-art QA system, extended with biomedical word embeddings and a novel mechanism to answer list questions. In contrast to existing biomedical QA systems, our system does not rely on domain-specific ontologies, parsers or entity taggers, which are expensive to create. Despite this fact, our systems achieve state-of-the-art results on factoid questions and competitive results on list questions.
|
{
"section_name": [
"Introduction",
"Model",
"Input Layer",
"Output Layer",
"Decoding",
"Domain Adaptation",
"Datasets",
"Training",
"Evaluation",
"Ensemble",
"Comparison to competing BioASQ systems",
"Qualitative Analysis",
"Discussion and future work",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Question answering (QA) is the task of retrieving answers to a question given one or more contexts. It has been explored both in the open-domain setting BIBREF0 as well as domain-specific settings, such as BioASQ for the biomedical domain BIBREF1 . The BioASQ challenge provides $\\approx 900$ factoid and list questions, i.e., questions with one and several answers, respectively. This work focuses on answering these questions, for example: Which drugs are included in the FEC-75 regimen? $\\rightarrow $ fluorouracil, epirubicin, and cyclophosphamide.",
"We further restrict our focus to extractive QA, i.e., QA instances where the correct answers can be represented as spans in the contexts. Contexts are relevant documents which are provided by an information retrieval (IR) system.",
"Traditionally, a QA pipeline consists of named-entity recognition, question classification, and answer processing steps BIBREF2 . These methods have been applied to biomedical datasets, with moderate success BIBREF3 . The creation of large-scale, open-domain datasets such as SQuAD BIBREF4 have recently enabled the development of neural QA systems, e.g., wang2016machine, dcn, seo2016bidirectional, weissenborn2017fastqa, leading to impressive performance gains over more traditional systems.",
"However, creating large-scale QA datasets for more specific domains, such as the biomedical, would be very expensive because of the need for domain experts, and therefore not desirable. The recent success of deep learning based methods on open-domain QA datasets raises the question whether the capabilities of trained models are transferable to another domain via domain adaptation techniques. Although domain adaptation has been studied for traditional QA systems BIBREF5 and deep learning systems BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , it has to our knowledge not yet been applied for end-to-end neural QA systems.",
"To bridge this gap we employ various domain adaptation techniques to transfer knowledge from a trained, state-of-the-art neural QA system (FastQA, weissenborn2017fastqa) to the biomedical domain using the much smaller BioASQ dataset. In order to answer list questions in addition to factoid questions, we extend FastQA with a novel answering mechanism. We evaluate various transfer learning techniques comprehensively. For factoid questions, we show that mere fine-tuning reaches state-of-the-art results, which can further be improved by a forgetting cost regularization BIBREF9 . On list questions, the results are competitive to existing systems. Our manual analysis of a subset of the factoid questions suggests that the results are even better than the automatic evaluation states, revealing that many of the \"incorrect\" answers are in fact synonyms to the gold-standard answer."
],
[
"Our network architecture is based on FastQA BIBREF15 , a state-of-the-art neural QA system. Because the network architecture itself is exchangeable, we treat it as a black box, with subtle changes at the input and output layer as well as to the decoding and training procedure. These changes are described in the following. See Figure 1 for an overview of the system."
],
[
"In a first step, words are embedded into a high-dimensional vector space. We use three sources of embeddings, which are concatenated to form a single embedding vector:",
"GloVe embeddings: 300-dimensional GloVe vectors BIBREF14 . These are open-domain word vectors trained on 840 billion tokens from web documents. The vectors are not updated during training.",
"Character embeddings: As used in FastQA BIBREF15 and proposed originally by seo2016bidirectional, we employ a 1-dimensional convolutional neural network which computes word embeddings from the characters of the word.",
"Biomedical Word2Vec embeddings: 200-dimensional vectors trained using Word2Vec BIBREF18 on about 10 million PubMed abstracts BIBREF19 . These vectors are specific to the biomedical domain and we expect them to help on biomedical QA.",
"As an optional step, we add entity tag features to the token embeddings via concatenation. Entity tags are provided by a dictionary-based entity tagger based on the UMLS Metathesaurus. The entity tag feature vector is a 127-dimensional bit vector that for each of the UMLS semantic types states whether the current token is part of an entity of that type. This step is only applied if explicitly noted.",
"Finally, a one-hot encoding of the question type (factoid or list) is appended to all the input vectors. With these embedding vectors as input, we invoke FastQA to produce start and end scores for each of the $n$ context tokens. We denote start scores by $y_{start}^{i}$ and end scores conditioned on a predicted start at position $i$ by $y_{end}^{i, j}$ , with start index $i \\in [1, n]$ and end index $j \\in [i, n]$ ."
],
[
"In our adapted output layer, we convert the start and end scores to span probabilities. The computation of these probabilities is independent of the question type. The interpretation, however, depends on the question type: While for factoid questions, the list of answer spans is interpreted as a ranked list of answer candidates, for list questions, answers above a certain probability threshold are interpreted as the set of answers to the question.",
"Given the start scores $y_{start}^1, ..., y_{start}^n$ and end scores $y_{end}^{i, 1}, ..., y_{end}^{i, n}$ , we compute the start and end probabilities as follows: ",
"$$p_{start}^i = \\sigma (y_{start}^i)$$ (Eq. 16) ",
"$$p_{end}^{i, \\cdot } = \\operatorname{softmax}(y_{end}^{i, \\cdot })$$ (Eq. 17) ",
"where $\\sigma (x)$ is the sigmoid function. As a consequence, multiple tokens can be chosen as likely start tokens, but the network is expected to select a single end token for a given start token, hence the $\\operatorname{softmax}$ function. Finally, the probability that a given span $(i, j)$ answers the question is $p_{span}^{i, j} = p_{start}^{i} \\cdot p_{end}^{i, j}$ . This extension generalizes the FastQA output layer such that multiple answer spans with different start positions can have a high probability, allowing us to retrieve multiple answers for list questions."
],
[
"Given a trained model, start probabilities can be obtained by running a forward pass and computing the start probability as in Equation 16 . For the top 20 starts, we compute the end probabilities as given by Eq. 17 . From the start and end probabilities, we extract the top 20 answer spans ranked by $p_{span}^{i, j}$ . As a simple post-processing step, we remove duplicate strings and retain only those with the highest probability.",
"For factoid questions, we output the 5 most likely answer spans as our ranked list of answers. For list questions, we learn a probability cutoff threshold $t$ that defines the set of list answers $A = \\lbrace (i, j) | p_{span}^{i, j} \\ge t\\rbrace $ . We choose $t$ to be the threshold that optimizes the list F1 score on the respective development set."
],
[
"Our training procedure consists of two phases: In the pre-training phase, we train the model on SQuAD, using a token F1 score as the training objective as by weissenborn2017fastqa. We will refer to the resulting parameters as the base model. In the fine-tuning phase, we initialize the model parameters with the base model and then continue our optimization on the BioASQ dataset with a smaller learning rate.",
"To avoid catastrophic forgetting during fine-tuning as a means to regularize our model, we optionally add an additional forgetting cost term $L_{fc}$ , as proposed by riemer2017forgettingcost. It is defined as the cross-entropy loss between the current predictions and the base model's predictions.",
"We also add an L2 loss term $L_{l2}$ which penalizes deviations from the base model's parameters. Note that a more advanced approach would be to apply this loss selectively on weights which are particularly important in the source domain BIBREF10 . The final loss is computed as $L_{final} = L_{original} + C_{fc} \\cdot L_{fc} + C_{l2} \\cdot L_{l2}$ where $C_{fc}$ and $C_{l2}$ are hyperparameters which are set to 0 unless otherwise noted.",
"In this section, we evaluate various domain adaptation techniques. The results of the experiments are summarized in Table 1 .",
"As a baseline without transfer learning, Experiment 1 trains the model on BioASQ only. Because the BioASQ dataset by itself is very small, a dropout rate of $0.7$ was used, because it worked best in preliminary experiments. We observe a rather low performance, which is expected when applying deep learning to such a small dataset.",
"Experiments 2 and 3 evaluate the pure fine-tuning approach: Our base model is a system trained on SQuAD only and tested on BioASQ (Experiment 2). For Experiment 3, we fine-tuned the base model on the BioASQ4B training set. We observe that performance increases significantly, especially on list questions. This increase is expected, because the network is trained on biomedical- and list questions, which are not part of the SQuAD dataset, for the first time. Overall, the performance of the fine-tuned model on both question types is much higher than the baseline system without transfer learning.",
"In order to evaluate the impact of using biomedical word embeddings, we repeat Experiment 3 without them (Experiment 4). We see a factoid and list performance drop of $3.3$ and $1.2$ percentage points, respectively, showing that biomedical word embeddings help increase performance.",
"In Experiment 5, we append entity features to the word vector, as described in Section \"Input Layer\" . Even though these features provide the network with domain-specific knowledge, we found that it actually harms performance on factoid questions. Because most of the entity features are only active during fine-tuning with the small dataset, we conjecture that the performance decrease is due to over-fitting.",
"We continue our study with techniques to combat catastrophic forgetting as a means to regularize training during fine-tuning. In Experiment 6 of Table 1 we fine-tune the base model on a half-half mixture of BioASQ and SQuAD questions (BioASQ questions have been upsampled accordingly). This form of joint training yielded no significant performance gains. Experiment 7 regularizes the model via an additional forgetting cost term, as proposed by riemer2017forgettingcost and explained in Section \"Domain Adaptation\" . We generally found that this technique only increases performance for factoid questions where the performance boost was largest for $C_{fc} = 100.0$ . The fact that the forgetting loss decreases performance on list questions is not surprising, as predictions are pushed more towards the predictions of the base model, which has very poor performance on list questions.",
"Experiment 8 adds an L2 loss which penalizes deviations from the base model's parameters. We found that performance decreases as we increase the value of $C_{l2}$ which shows that this technique does not help at all. For the sake of completeness we report results for $C_{l2} = 0.3$ , the lowest value that yielded a significant drop in performance."
],
[
"SQuAD BIBREF4 is a dataset of $\\approx 100,000$ questions with relevant contexts and answers that sparked research interest into the development of neural QA systems recently. The contexts are excerpts of Wikipedia articles for which crowd-source workers generated questions-answer pairs. Because of the large amount of training examples in SQuAD, it lends itself perfectly as our source dataset.",
"The BioASQ challenge provides a biomedical QA dataset BIBREF1 consisting of questions, relevant contexts (called snippets) from PubMed abstracts and possible answers to the question. It was carefully created with the help of biomedical experts.",
"In this work, we focus on Task B, Phase B of the BioASQ challenge, in which systems must answer questions from gold-standard snippets. These questions can be either yes/no questions, summary questions, factoid questions, or list questions. Because we employ an extractive QA system, we restrict this study to answering factoid and list questions by extracting answer spans from the provided contexts.",
"The 2017 BioASQ training dataset contains $1,799$ questions, of which 413 are factoid and 486 are list questions. The questions have $\\approx 20$ snippets on average, each of which are on average $\\approx 34$ tokens long. We found that around $65\\%$ of the factoid questions and around $92\\%$ of the list questions have at least one extractable answer. For questions with extractable answers, answers spans are computed via a simple substring search in the provided snippets. All other questions are ignored during training and treated as answered incorrectly during evaluation."
],
[
"We minimize the cross-entropy loss for the gold standard answer spans. However, for multiple answer spans that refer to the same answer (e.g. synonyms), we only minimize the loss for the span of the lowest loss. We use the ADAM BIBREF20 for optimization on SQuAD with a learning rate starting at $10^{-3}$ which is halved whenever performance drops between checkpoints. During the fine-tuning phase, we continue optimization on the BioASQ dataset with a smaller learning rate starting at $10^{-4}$ . During both phases, the model is regularized by variational dropout of rate $0.5$ BIBREF21 ."
],
[
"The official evaluation measures from BioASQ are mean reciprocal rank (MRR) for factoid questions and F1 score for list questions . For factoid questions, the list of ranked answers can be at most five entries long. The F1 score is measured on the gold standard list elements. For both measures, case-insensitive string matches are used to check the correctness of a given answer. A list of synonyms is provided for all gold-standard answers. If the system's response matches one of them, the answer counts as correct.",
"For evaluation, we use two different fine-tuning datasets, depending on the experiment: BioASQ3B, which contains all questions of the first three BioASQ challenges, and BioASQ4B which additionally contains the test questions of the fourth challenge. BioASQ4B is used as the training dataset for the fifth BioASQ challenge whereas BioASQ3B was used for training during the fourth challenge.",
"Because the datasets are small, we perform 5-fold cross-validation and report the average performance across the five folds. We use the larger BioASQ4B dataset except when evaluating the ensemble and when comparing to participating systems of previous BioASQ challenges.",
"All models were implemented using TensorFlow BIBREF22 with a hidden size of 100. Because the context in BioASQ usually comprises multiple snippets, they are processed independently in parallel for each question. Answers from all snippets belonging to a question are merged and ranked according to their individual probabilities."
],
[
"Model ensembles are a common method to tweak the performance of a machine learning system. Ensembles combine multiple model predictions, for example by averaging, in order to improve generalization and prevent over-fitting. We evaluate the utility of an ensemble by training five models on the BioASQ3B dataset using 5-fold cross-validation. Each of the models is evaluated on the 4B test data, i.e., data which is not included in BioASQ3B.",
"During application, we run an ensemble by averaging the start and end scores of individual models before they are passed to the sigmoid / softmax functions as defined in Eq. 16 and 17 . In Table 2 we summarize the average performance of the five models, the best performance across the five models, and the performance of the ensemble. We observe performance gains of 3 percentage points on factoid questions and a less than 1 percentage point on list questions, relative to the best single model. This demonstrates a small performance gain that is consistent with the literature."
],
[
"Because the final results of the fifth BioASQ challenge are not available at the time of writing, we compare our system to the best systems in last year's challenge . For comparison, we use the best single model and the model ensemble trained on BioASQ3B (see Section \"Ensemble\" ). We then evaluate the model on the 5 batches of last year's challenge using the official BioASQ evaluation tool. Each batch contains 100 questions of which only some are factoid and list questions. Note that the results underestimate our system's performance, because our competing system's responses have been manually evaluated by humans while our system's responses are evaluated automatically using string matching against a potentially incomplete list of synonyms. In fact, our qualitative analysis in Section \"Qualitative Analysis\" shows that many answers are counted as incorrect, but are synonyms of the gold-standard answer. The results are summarized in Table 3 and compared to the best systems in the challenge in each of the batches and question type categories.",
"With our system winning four out of five batches on factoid questions, we consider it state-of-the-art in biomedical factoid question answering, especially when considering that our results might be higher on manual evaluation. The results on list questions are slightly worse, but still very competitive. This is surprising, given that the network never saw a list question prior to the fine-tuning phase. Due to small test set sizes, the sampling error in each batch is large, causing the single model to outperform the model ensemble on some batches."
],
[
"In order to get a better insight into the quality of the predictions, we manually validated the predictions for the factoid questions of batch 5 of the fourth BioASQ challenge as given by the best single model (see Table 3 ). There are in total 33 factoid questions, of which 23 have as the gold standard answer a span in one of the contexts. According to the official BioASQ evaluation, only 4 questions are predicted correctly (i.e., the gold standard answer is ranked highest). However, we identified 10 rank-1 answers which are not counted as correct but are synonyms to the gold standard answer. Examples include \"CMT4D disease\" instead of \"Charcot-Marie-Tooth (CMT) 4D disease\", \"tafazzin\" instead of \"Tafazzin (TAZ) gene\", and \" $\\beta $ -glucocerebrosidase\" instead of \"Beta glucocerebrosidase\". In total, we labeled 14 questions as correct and 24 questions as having their correct answer in the top 5 predictions.",
"In the following, we give examples of mistakes made by the system. Questions are presented in italics. In the context, we underline predicted answers and present correct answers in boldface.",
"We identified eight questions for which the semantic type of the top answer differs from the question answer type. Some of these cases are completely wrong predictions. However, this category also includes subtle mistakes like the following:",
"In which yeast chromosome does the rDNA cluster reside?",
"The rDNA cluster in Saccharomyces cerevisiae is located 450 kb from the left end and 610 kb from the right end of chromosome XII...",
"Here, it predicted a yeast species the rDNA cluster is located in, but ignored that the question is asking for a chromosome.",
"Another type of mistakes is that the top answer is somewhat correct, but is missing essential information. We labeled four predictions with this category, like the following example:",
"How early during pregnancy does non-invasive cffDNA testing allow sex determination of the fetus?",
"Gold Standard Answer: \"6th to 10th week of gestation\" or \"first trimester of pregnancy\"",
"Given Top Answer: \"6th-10th\"",
"In summary, to our judgment, 14 of 33 questions ( $42.4\\%$ ) are answered correctly, and 24 of 33 questions ( $72.7\\%$ ) are answered correctly in one of the top 5 answers. These are surprisingly high numbers considering low MRR score of $23.7\\%$ of the automatic evaluation (Table 3 )."
],
[
"The most significant result of this work is that state-of-the-art results in biomedical question answering can be achieved even in the absence of domain-specific feature engineering. Most competing systems require structured domain-specific resources, such as biomedical ontologies, parsers, and entity taggers. While these resources are available in the biomedical domain, they are not available in most domains.",
"Our system, on the other hand, requires a large open-domain QA dataset, biomedical word embeddings (which are trained in an unsupervised fashion), and a small biomedical QA dataset. This suggests that our methodology is easily transferable to other domains as well.",
"Furthermore, we explored several supervised domain adaptation techniques. In particular, we demonstrated the usefulness of forgetting cost for factoid questions. The decreased performance on list questions is not surprising, because the model's performance on those questions is very poor prior to fine-tuning which is due to the lack of list questions in SQuAD. We believe that large scale open-domain corpora for list questions would enhance performance further.",
"Unsupervised domain adaptation could be an interesting direction for future work, because the biomedical domain offers large amounts of textual data, some of which might even contain questions and their corresponding answers. We believe that leveraging these resources holds potential to further improve biomedical QA."
],
[
"In this paper, we described a deep learning approach to address the task of biomedical question answering by using domain adaptation techniques. Our experiments reveal that mere fine-tuning in combination with biomedical word embeddings yield state-of-the-art performance on biomedical QA, despite the small amount of in-domain training data and the lack of domain-dependent feature engineering. Techniques to overcome catastrophic forgetting, such as a forgetting cost, can further boost performance for factoid questions. Overall, we show that employing domain adaptation on neural QA systems trained on large-scale, open-domain datasets can yield good performance in domains where large datasets are not available."
],
[
"This research was supported by the German Federal Ministry of Education and Research (BMBF) through Software Campus project GeNIE (01IS12050)."
]
]
}
|
{
"question": [
"Among various transfer learning techniques, which technique yields to the best performance?"
],
"question_id": [
"f704d182c9e01a2002381b76bf21e4bb3c0d3efc"
],
"nlp_background": [
"five"
],
"topic_background": [
"unfamiliar"
],
"paper_read": [
"no"
],
"search_query": [
"question"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"108dd4f0f2f41d11b3e029d7a8a22d83896cb812"
],
"worker_id": [
"7dd5db428d7a43d2945b97c0c07fa56af4eb02ae"
]
}
]
}
|
{
"caption": [
"Figure 1: Network architecture of our system for biomedical question answering. At its core, it uses an extractive neural QA system as a black box (we use FastQA (Weissenborn et al., 2017)). The embedding layer is modified in order to include biomedical word embeddings and question type features. The output layer is adjusted to add the ability to answer list questions in addition to factoid questions.",
"Table 1: Comparison of various transfer learning techniques. In Experiment 1, the model was trained on BioASQ only. In Experiment 2, the model was trained on SQuAD and tested on BioASQ. We refer to it as the base model. In Experiment 3, the base model parameters were fine-tuned on the BioASQ training set. Experiments 4-5 evaluate the utility of domain dependent word vectors and features. Experiments 6-8 address the problem of catastrophic forgetting. All experiments have been conducted with the BioASQ4B dataset and 5-fold cross-validation.",
"Table 2: Performance of a model ensemble. Five models have been trained on the BioASQ3B dataset and tested on the 4B test questions. We report the average and best single model performances, as well as the ensemble performance.",
"Table 3: Comparison to systems on last year’s (fourth) BioASQ challenge for factoid and list questions. For each batch and question type, we list the performance of the best competing system, our single model and ensemble. Note that our qualitative analysis (Section 5.4) suggests that our factoid performance on batch 5 would be about twice as high if all synonyms were contained in the gold standard answers."
],
"file": [
"3-Figure1-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png"
]
}
|
1908.11425
|
Classifying topics in speech when all you have is crummy translations.
|
Given a large amount of unannotated speech in a language with few resources, can we classify the speech utterances by topic? We show that this is possible if text translations are available for just a small amount of speech (less than 20 hours), using a recent model for direct speech-to-text translation. While the translations are poor, they are still good enough to correctly classify 1-minute speech segments over 70% of the time - a 20% improvement over a majority-class baseline. Such a system might be useful for humanitarian applications like crisis response, where incoming speech must be quickly assessed for further action.
|
{
"section_name": [
"Introduction",
"Methods ::: Speech-to-text translation.",
"Methods ::: Topic modeling and classification.",
"Experimental Setup ::: Data.",
"Experimental Setup ::: Fine-grained topic analysis.",
"Results ::: Spanish-English ST.",
"Results ::: Topic Modeling on training data.",
"Results ::: Topic classification on test data",
"Related work",
"Conclusions and future work",
"Acknowledgments",
"Using NMF for topic modeling",
"Using NMF for topic modeling ::: Text processing",
"Using NMF for topic modeling ::: Learning topics",
"Using NMF for topic modeling ::: Making topic predictions",
"Using NMF for topic modeling ::: Silver labels and evaluation",
"Fisher corpus: assigned topics",
"Tracking topic drift over conversations"
],
"paragraphs": [
[
"Quickly making sense of large amounts of linguistic data is an important application of language technology. For example, after the 2011 Japanese tsunami, natural language processing was used to quickly filter social media streams for messages about the safety of individuals, and to populate a person finder database BIBREF0. Japanese text is high-resource, but there are many cases where it would be useful to make sense of speech in low-resource languages. For example, in Uganda, as in many parts of the world, the primary source of news is local radio stations, which broadcast in many languages. A pilot study from the United Nations Global Pulse Lab identified these radio stations as a potentially useful source of information about a variety of urgent topics related to refugees, small-scale disasters, disease outbreaks, and healthcare BIBREF1. With many radio broadcasts coming in simultaneously, even simple classification of speech for known topics would be helpful to decision-makers working on humanitarian projects.",
"Recent research has shown that it is possible train direct Speech-to-text Translation (ST) systems from speech paired only with translations BIBREF2, BIBREF3, BIBREF4. Since no transcription is required, this could be useful in very low-resource settings, even for languages with no writing systems. In realistic low-resource settings where only a few hours of training data is available, these systems produce poor translations BIBREF5, but it has long been recognized that there are good uses for bad translations BIBREF6. Could classifying the original speech be one of those uses?",
"We answer this question affirmatively: using ST to translate speech to text, we then classify by topic using supervised models (Figure FIGREF1). We test our method on a corpus of conversational Spanish speech paired with English text translations. Using an ST model trained on 20 hours of Spanish-English data, we are able to predict topics correctly 71% of the time. With even worse ST, we can still predict topics with an accuracy of 61%."
],
[
"We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models. As in that study, before training ST, we pre-train the models using English ASR data from the Switchboard Telephone speech corpus BIBREF7, which consists of around 300 hours of English speech and transcripts. This was reported to substantially improve translation quality when the training set for ST was only tens of hours."
],
[
"To classify the translated documents, we first need a set of topic labels, which were not already available for our dataset. So, we initially discover a set of topics from the target-language training text using a topic model. To classify the translations of the test data, we choose the most probable topic according to the learned topic model. To train our topic model, we use Nonnegative Matrix Factorization BIBREF8, BIBREF9."
],
[
"We use the Fisher Spanish speech corpus BIBREF11, which consists of 819 phone calls, with an average duration of 12 minutes, amounting to a total of 160 hours of data. We discard the associated transcripts and pair the speech with English translations BIBREF12, BIBREF13. To simulate a low-resource scenario, we sampled 90 calls (20h) of data (train20h) to train both ST and topic models, reserving 450 calls (100h) to evaluate topic models (eval100h). Our experiments required ST models of varying quality, so we also trained models with decreasing amounts of data: ST-10h, ST-5h, and ST-2.5h are trained on 10, 5, and 2.5 hours of data respectively, sampled from train20h. To evaluate ST only, we use the designated Fisher test set, as in previous work."
],
[
"In the Fisher protocol, callers were prompted with one of 25 possible topics. It would seem appealing to use the prompts as topic labels, but we observed that many conversations quickly departed from the initial prompt and meandered from topic to topic. For example, one call starts: “Ok today's topic is marriage or we can talk about anything else...”. Within minutes, the topic shifts to jobs: “I'm working oh I do tattoos.” To isolate different topics within a single call, we split each call into 1 minute long segments to use as `documents'. This gives us 1K training and 5.5K test segments, but leaves us with no human-annotated topic labels for them.",
"Obtaining gold topic labels for our data would require substantial manual annotation, so we instead use the human translations from the 1K (train20h) training set utterances to train the NMF topic model with scikit-learn BIBREF14, and then use this model to infer topics on the evaluation set. These silver topics act as an oracle: they tell us what a topic model would infer if it had perfect translations. NMF and model hyperparameters are described in Appendix SECREF7.",
"To evaluate our ST models, we apply our ST model to test audio, and then predict topics from the translations using the NMF model trained on the human translations of the training data (Figure FIGREF1). To report accuracy we compare the predicted labels and silver labels, i.e., we ask whether the topic inferred from our predicted translation (ST) agrees with one inferred from a gold translation (human)."
],
[
"To put our topic modeling results in context, we first report ST results. Figure FIGREF9 plots the BLEU scores on the Fisher test set and on eval100h for Spanish-English ST models. The scores are very similar for both sets when computed using a single human reference; scores are 8 points higher on the Fisher test set if all 4 of its available references are used. The state-of-the-art BLEU score on the Fisher test set is 47.3 (using 4 references), reported by BIBREF3, who trained an ST model on the entire 160 hours of data in the Fisher training corpus. By contrast, 20 hour model (ST-20h) achieves a BLEU score of 18.1. Examining the translations (Table TABREF10), we see that while they are mediocre, they contain words that might enable correct topic classification."
],
[
"Turning to our main task of classification, we first review the set of topics discovered from the human translations of train20h (Table TABREF13). We explored different numbers of topics, and chose 10 after reviewing the results. We assigned a name to each topic after manually reviewing the most informative terms; for topics with less coherent sets of informative terms, we include misc in their names.",
"We argued above that the silver labels are sensible for evaluation despite not always matching the assigned call topic prompts, since they indicate what an automatic topic classifier would predict given correct translations and they capture finer-grained changes in topic. Table TABREF14 shows a few examples where the silver labels differ from the assigned call topic prompts. In the first example, the topic model was arguably incorrect, failing to pick up the prompt juries, and instead focusing on the other words, predicting intro-misc. But in the other examples, the topic model is reasonable, in fact correctly identifying the topic in the third example where the transcripts indicate that the annotation was wrong (specifying the topic prompt as music). The topic model also classifies a large proportion of discussions as intro-misc (typically at the start of the call) and family-misc (often where the callers stray from their assigned topic).",
"Our analysis also supports our observation that discussed topics stray from the prompted topic in most speech segments. For example, among segments in the 17 training data calls with the prompt religion, only 36% have the silver label religion, and the most frequently assigned label is family-misc with 46%. Further details are in Appendix SECREF9."
],
[
"Now we turn to our main experiment. For each of the audio utterances in eval100h, we have four ST model translations: ST-2.5h, 5h, 10h, 20h (in increasing order of quality). We feed each of these into the topic model from Table TABREF13 to get the topic distribution and use the highest scoring topic as the predicted label.",
"Figure FIGREF16 compares the frequencies of the silver labels with the predictions from the ST-20h model. The family-misc topic is predicted most often—almost 50% of the time. This is reasonable since this topic includes words associated with small talk. Other topics such as music, religion and welfare also occur with a high enough frequency to allow for a reasonable evaluation.",
"Figure FIGREF17 shows the accuracy for all ST models, treating the silver topic labels as the correct topics. We use the family-misc topic as a majority class naive baseline, giving an accuracy of 49.6%. We observe that ST models trained on 10 hours or more of data outperform the naive-baseline by more than 10% absolute, with ST-20h scoring 71.8% and ST-10h scoring 61.6%. Those trained on less than 5 hours of data score close to or below that of the naive baseline: 51% for ST-5h and 48% for ST-2.5h.",
"Since topics vary in frequency, we look at label-specific accuracy to see if the ST models are simply predicting frequent topics correctly. Figure FIGREF18 shows a normalized confusion matrix for the ST-20h model. Each row sums to 100%, representing the distribution of predicted topics for any given silver topic, so the numbers on the diagonal can be interpreted as the topic-wise recall. For example, a prediction of music recalls 88% of the relevant speech segments. We see that the model has an recall of more than 50% for all 10 topics, making it quite effective for our motivating task. The family-misc topic (capturing small-talk) is often predicted when other silver topics are present, with e.g. 23% of the silver dating topics predicted as family-misc."
],
[
"We have shown that even low-quality ST can be useful for speech classification. Previous work has also looked at speech analysis without high-quality ASR. In a task quite related to ours, BIBREF15 showed how to cluster speech segments in a completely unsupervised way. In contrast, we learn to classify speech using supervision, but what is important about our result is it shows that a small amount of supervision goes a long way. A slightly different approach to quickly analysing speech is the established task of Keyword spotting BIBREF16, BIBREF17, which simply asks whether any of a specific set of keywords appears in each segment. Recent studies have extended the early work to end-to-end keyword spotting BIBREF18, BIBREF19 and to semantic keyword retrieval, where non-exact but relevant keyword matches are retrieved BIBREF20, BIBREF21, BIBREF22. In all these studies, the query and search languages are the same, while we consider the cross-lingual case.",
"There has been some limited work on cross-lingual keyword spotting BIBREF23, where ASR is cascaded with text-based cross-lingual retrieval. Some recent studies have attempted to use vision as a complementary modality to do cross-lingual retrieval BIBREF24, BIBREF25. But cross-lingual topic classification for speech has not been considered elsewhere, as far as we know."
],
[
"Our results show that poor speech translation can still be useful for speech classification in low-resource settings. By varying the amount of training data, we found that translations with a BLEU score as low as 13 are still able to correctly classify 61% of the speech segments.",
"Cross-lingual topic modeling may be useful when the target language is high-resource. Here, we learned target topics just from the 20 hours of translations, but in future work, we could use a larger text corpus in the high-resource language to learn a more general topic model covering a wider set of topics, and/or combine it with keyword lists curated for specific scenarios like disaster recovery BIBREF26."
],
[
"This work was supported in part by a James S McDonnell Foundation Scholar Award and a Google faculty research award. We thank Ida Szubert, Marco Damonte, and Clara Vania for helpful comments on previous drafts of this paper.",
""
],
[
"We now describe how we learn topics using NMF. Given a set of text documents as input, the model will output (1) for each document, a distribution over the selected number of topics (henceforth, the document-topic distribution), and (2) for each topic, a distribution over the set of unique terms in the text (henceforth, the topic-term distribution)."
],
[
"Our training set (train20h) has 1080 English sentences. We start by generating a tf-idf representation for each of these. The English text contains 170K tokens and 6K terms (vocabulary size). As we are looking for topics which are coarse-level categories, we do not use the entire vocabulary, but instead focus only on the high importance terms. We lowercase the English translations and remove all punctuation, and stopwords. We further remove the terms occurring in more than 10% of the documents and those which occur in less than 2 documents, keeping only the 1000 most frequent out of the remaining.",
"After preprocessing the training set, we have a feature matrix $V$ with dimensions $1080\\times 1000$, where each row is a document, and each column represents the tf-idf scores over the 1000 selected terms. The feature matrix will be sparse as only a few terms would occur in a document, and will also be non-negative as tf-idf values are greater than or equal to 0."
],
[
"NMF is a matrix factorization method, which given the matrix $V$, factorizes it into two matrices: $W$ with dimensions $1080\\times t$ (long-narrow), and $H$ with dimensions $t\\times 1000$ (short-wide), where $t$ is a hyper-parameter. Figure FIGREF21 shows this decomposition when $t$ is set to 10.",
"In the context of topic modeling, $t$ is the number of topics we want to learn; $W$ is the document-topic distribution, where for each document (row) the column with the highest value is the most-likely topic; and $H$ is the topic-term distribution, where each row is a topic, and the columns with the highest values are terms most relevant to it.",
"The values for $W$ and $H$ are numerically approximated using a multiplicative update rule BIBREF27, with the Frobenius norm of the reconstruction error as the objective function. In this work, we use the machine-learning toolkit scikit-learn BIBREF14 for feature extraction, and to perform NMF, using default values as described at scikit-learn.org."
],
[
"Using our topic-term distribution matrix $H$, we can now make topic predictions for new text input. Our evaluation set (eval100h) has 5376 English sentences. For each of these, we have the gold text, and also the ST model output. We preprocess and represent these using the same procedure as before (SECREF19) giving us the feature matrix $V^{^{\\prime }}_{gold}$ for gold, and $V^{^{\\prime }}_{ST}$ for ST output, each with dimensions $5376\\times 1000$. Our goal is to learn the document-topic distributions $W^{^{\\prime }}_{gold}$ and $W^{^{\\prime }}_{ST}$, where:",
"The values for each $W^{^{\\prime }}$ matrix are again numerically approximated using the same objective function as before, but keeping $H$ fixed."
],
[
"We use the highest scoring topic for each document as the prediction. The silver labels are therefore computed as $argmax(W^{^{\\prime }}_{gold})$, and for ST as $argmax(W^{^{\\prime }}_{ST})$. We can now compute the accuracy over these two sets of predictions."
],
[
"Figure FIGREF24 shows the topics assigned to callers in the Fisher speech corpus. Some topic prompts overlap, for example, music-preference asks callers to discuss what kind of music they like to listen to, and music-social-message asks them to discuss the social impact of music. For both these topics, we would expect the text to contain similar terms. Similarly the topics cellphones-usage, tech-devices and telemarketing-spam also overlap. Such differences might be difficult for an unsupervised topic modeling algorithm to pick up.",
"Table TABREF25 shows the topics learned by NMF by using human English translations from the entire 160 hours of training data as input, when the number of topics is set to 25. We observe that some new topics are found that were not discovered by the 20hr/10-topic model and that match the assigned topic prompts, such as juries and housing. However, there are also several incoherent topics, and we don't find a major improvement over the topics learned by just using 20 hours of training data, with the number of topics set to 10."
],
[
"To measure how often speakers stray from assigned topic prompts, we take a closer look at the calls in train20h with the assigned prompt of religion. This is the most frequently assigned prompt in the Fisher dataset (17 calls in train20h). We also select this topic for further analysis as it contains terms which are strongly indicative, such as god, bible, etc. and should be relatively easier for our topic model to detect.",
"Figure FIGREF26 shows the trend of discussion topics over time. Overall, only 36% of the total dialog segments in these calls have the silver label religion, and the most frequently assigned label is family-misc with 46%. We observe that the first segment is often labeled as intro-misc, around 70% of the time, which is expected as speakers begin by introducing themselves. Figure FIGREF26 shows that a similar trend emerges for calls assigned the prompt music (14 calls in train20h). Silver labels for music account for 45% of the call segments and family-misc for around 38%."
]
]
}
|
{
"question": [
"What is the architecture of the model?",
"What language do they look at?"
],
"question_id": [
"da544015511e535503dee2eaf4912a5e36c806cd",
"7bc993b32484d6ae3c86d0b351a68e59fd2757a5"
],
"nlp_background": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no"
],
"search_query": [
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"BIBREF5 to train neural sequence-to-sequence",
"NMF topic model with scikit-learn BIBREF14"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models. As in that study, before training ST, we pre-train the models using English ASR data from the Switchboard Telephone speech corpus BIBREF7, which consists of around 300 hours of English speech and transcripts. This was reported to substantially improve translation quality when the training set for ST was only tens of hours.",
"Obtaining gold topic labels for our data would require substantial manual annotation, so we instead use the human translations from the 1K (train20h) training set utterances to train the NMF topic model with scikit-learn BIBREF14, and then use this model to infer topics on the evaluation set. These silver topics act as an oracle: they tell us what a topic model would infer if it had perfect translations. NMF and model hyperparameters are described in Appendix SECREF7."
],
"highlighted_evidence": [
"We use the method of BIBREF5 to train neural sequence-to-sequence Spanish-English ST models.",
"Obtaining gold topic labels for our data would require substantial manual annotation, so we instead use the human translations from the 1K (train20h) training set utterances to train the NMF topic model with scikit-learn BIBREF14"
]
}
],
"annotation_id": [
"efa0c448e59f1d6ea924445e98dde8cb52e9079d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Spanish"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use the Fisher Spanish speech corpus BIBREF11, which consists of 819 phone calls, with an average duration of 12 minutes, amounting to a total of 160 hours of data. We discard the associated transcripts and pair the speech with English translations BIBREF12, BIBREF13. To simulate a low-resource scenario, we sampled 90 calls (20h) of data (train20h) to train both ST and topic models, reserving 450 calls (100h) to evaluate topic models (eval100h). Our experiments required ST models of varying quality, so we also trained models with decreasing amounts of data: ST-10h, ST-5h, and ST-2.5h are trained on 10, 5, and 2.5 hours of data respectively, sampled from train20h. To evaluate ST only, we use the designated Fisher test set, as in previous work."
],
"highlighted_evidence": [
"We use the Fisher Spanish speech corpus BIBREF11, which consists of 819 phone calls, with an average duration of 12 minutes, amounting to a total of 160 hours of data."
]
}
],
"annotation_id": [
"10d24790c198f005fc03b620b2f5a825d1268226"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: Spanish speech is translated to English text, and a classifier then predicts its topic.",
"Figure 2: BLEU scores for Spanish-English ST models computed on Fisher test set, using all 4 human references available, and using only 1 reference, and on eval100h, for which we have only 1 human reference.",
"Table 1: Examples of Spanish audio shown as Spanish text. An ST system translates the audio into English text, and we give the human reference. Our task is to predict the topic of discussion in the audio, which are potentially signaled by the underlined words.",
"Table 3: Example audio utterances from eval100h. We show a part of the human translation here. Assigned is the topic assigned to speakers in the current call to prompt discussion. Silver is topic inferred by feeding the human translation through the topic model.",
"Table 2: Topics discovered using human translated text from train20h, with manually-assigned topic names.",
"Figure 3: Distribution of topics predicted for the 5K audio utterances in eval100h. silver labels are predicted using human translations. The ST model has been trained on 20 hours of Spanish-English data.",
"Figure 4: Accuracy of topic prediction using ST model output. The naive baseline is calculated using majority class prediction, which is the topic family-misc.",
"Figure 5: Confusion matrix for ST model trained on 20 hours of Spanish-English data. Each cell represents the percentage of the silver topic labels predicted as the x-axis label, with each row summing to 100%.",
"Figure 6: Nonnegative Matrix Factorization. V is the document-term matrix, where d is each document; N is the number of documents; w1 to w1000 are the terms selected as features; and t1 to t10 are the topics.",
"Table 4: Topics discovered using human translated text from the full 160hr Fisher training set. We set the number of topics to 25. We assign the topic names manually, and use — where the topic clustering is not very clear.",
"Figure 7: Topics assigned to callers in the Fisher dataset, as a percentage of the 819 calls.",
"Figure 8: Tracking silver labels over time for calls where the assigned prompt is religion. Total of 17 calls in train20h.",
"Figure 9: Tracking silver labels over time for calls where the assigned prompt is music. Total of 14 calls in train20h."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png",
"3-Table3-1.png",
"3-Table2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-Figure5-1.png",
"7-Figure6-1.png",
"8-Table4-1.png",
"8-Figure7-1.png",
"9-Figure8-1.png",
"9-Figure9-1.png"
]
}
|
1711.04457
|
Word, Subword or Character? An Empirical Study of Granularity in Chinese-English NMT
|
Neural machine translation (NMT), a new approach to machine translation, has been shown to outperform conventional statistical machine translation (SMT) across a variety of language pairs. Translation is an open-vocabulary problem, but most existing NMT systems operate with a fixed vocabulary, which leaves them unable to translate rare words. This problem can be alleviated by using different translation granularities, such as character, subword and hybrid word-character. Translation involving Chinese is one of the most difficult tasks in machine translation; however, to the best of our knowledge, no other work has explored which translation granularity is most suitable for Chinese in NMT. In this paper, we conduct an extensive comparison using Chinese-English NMT as a case study, and we discuss the advantages and disadvantages of various translation granularities in detail. Our experiments show that the subword model performs best for Chinese-to-English translation when the vocabulary is not too large, while the hybrid word-character model is most suitable for English-to-Chinese translation. Moreover, experiments mixing granularities show that the Hybrid_BPE method achieves the best result on the Chinese-to-English translation task.
|
{
"section_name": [
"Introduction",
"Neural Machine Translation",
"Description of Different Translation Granularities",
"Character Level",
"Hybrid Word-Characters Level",
"Subword Level",
"Dataset",
"Training Details",
"Data Segmentation",
"Results on Chinese-to-English Translation",
"Results on English-to-Chinese Translation",
"Related Work",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Neural machine translation (NMT) proposed by Kalchbrenner and Blunsom BIBREF0 and Sutskever et al. BIBREF1 has achieved significant progress in recent years. Unlike traditional statistical machine translation(SMT) BIBREF2 , BIBREF3 , BIBREF4 which contains multiple separately tuned components, NMT builds an end-to-end framework to model the entire translation process. For several language pairs, NMT has already achieved better translation performance than SMT BIBREF5 , BIBREF6 .",
"Conventional NMT system limits the vocabulary to a modest-sized vocabulary in both sides and words out of vocabulary are replaced by a special UNK symbol. However, the process of training and decoding is often conducted on an open vocabulary, in which an obvious problem is that NMT model is incapable of translating rare words. In particular, if a source word is outside the source vocabulary or its translation is outside the target vocabulary, the model is unable to generate proper translation for this word during decoding. Both Sutskever et al. BIBREF1 and Bahdanau et al. BIBREF7 have observed that sentences with many out-of-vocabulary words tend to be translated much more poorly than sentences mainly containing frequent words.",
"To address this problem, many researchers propose a broad category of approaches by employing different translation granularities. Most of these are below the word level, e.g. characters BIBREF8 , hybrid word-characters BIBREF9 , BIBREF5 , and more intelligent subwords BIBREF10 , BIBREF5 . Besides, pioneering studies BIBREF5 , BIBREF6 demonstrate that translation tasks involving Chinese are some of the most difficult problems in NMT systems. However, there is no study that shows which translation granularity is suitable for Chinese-to-English and English-to-Chinese translation tasks.",
"In this work, we make an empirical comparison of different translation granularities for bidirectional English-Chinese translation tasks. In addition, we analyze the impact of these strategies on the translation results in detail. We demonstrate that Chinese-to-English NMT of 15k and 30k vocabulary size can acquire best results using subword model and with 60k vocabulary size hybrid word-character model obtains the highest performance, while hybrid word-character model is most suitable for English-to-Chinese translation. Our experiment shows that all subword methods are not bounded by the vocabulary size. Furthermore, we carry out the experiments that employ different translation granularities of source side and target side. The translation result shows that when the source granularity is hybrid word-character level and the target sentences are split into subword level by BPE method, it can achieve the best translation performance for Chinese-to-English translation task. As for English-to-Chinese translation task, Hybrid word-character model is most suitable. To the best of our knowledge, this is the first work on an empirical comparison of various translation granularities for bidirectional Chinese-English translations."
],
[
"Our models are based on an encoder-decoder architecture with attention mechanism proposed by Luong et al. BIBREF11 , which utilizes stacked LSTM layers for both encoder and decoder as illustrated in Figure FIGREF1 . In this section, we make a review of NMT framework.",
"First, the NMT encodes the source sentence INLINEFORM0 into a sequence of context vector representation INLINEFORM1 . Then, the NMT decodes from the context vector representation INLINEFORM2 and generates target translation INLINEFORM3 one word each time by maximizing the probability of INLINEFORM4 . Next, We review the encoder and decoder frameworks briefly.",
"Encoder: The context vector representation INLINEFORM0 is generated by the encoder using INLINEFORM1 stacked LSTM layers. Bi-directional connections are used for the bottom encoder layer, and INLINEFORM2 is a concatenation vector as shown in Eq. (1): DISPLAYFORM0 ",
"All other encoder layers are unidirectional, and INLINEFORM0 is calculated as follows: DISPLAYFORM0 ",
"Decoder: The conditional probability INLINEFORM0 is formulated as DISPLAYFORM0 ",
"Specifically, we employ a simple concatenation layer to produce an attentional hidden state INLINEFORM0 : DISPLAYFORM0 ",
"where INLINEFORM0 denotes the target hidden state at the top layer of a stacking LSTM. The attention model calculates INLINEFORM1 as the weighted sum of the source-side context vector representation, just as illustrated in the upper left corner of Figure FIGREF1 . DISPLAYFORM0 ",
"where INLINEFORM0 is a normalized item calculated as follows: DISPLAYFORM0 ",
" INLINEFORM0 is computed by using the following formula: DISPLAYFORM0 ",
"If INLINEFORM0 , INLINEFORM1 will be calculated by combining INLINEFORM2 as feed input BIBREF11 : DISPLAYFORM0 ",
"Given the bilingual training data INLINEFORM0 , all parameters of the attention-based NMT are optimized to maximize the following conditional log-likelihood: DISPLAYFORM0 "
],
[
"We revisit how the source and target sentences ( INLINEFORM0 and INLINEFORM1 ) are represented in NMT. For the source side of any given training corpus, we scan through the whole corpus to build a vocabulary INLINEFORM2 of unique tokens. A source sentence INLINEFORM3 is then built as a sequence of the integer indices. The target sentence is similarly transformed into a target sequence of integer indices.",
"The property of NMT allows us great freedom in the choice of token units, and we can segment sentences in different ways. In this section, we will elaborate on four proposed approaches about the choice of translation granularities."
],
[
"This translation granularity is easy to implement. For this granularity, what we have to do is split the sentence into a sequence of characters. However, the character-level modeling on the English side is more challenging, as the network has to be able to deal with long and coherent sequence of characters. In this case, the number of characters is often 300 INLINEFORM0 1000 symbols long, where the size of the state space grows exponentially. Therefore, this is a great challenge for us to handle.",
"Besides, the alphabet of English is only consist of 26 letters, in which the vocabulary of English side is too small. Considering these facts, we only separate the Chinese side sentences into characters rather than both sides. Figure FIGREF11 shows an example of this translation granularity for character level."
],
[
"In regular word-based NMT, for all words outside the source vocabulary, one feeds the universal embedding representing UNK as input to the encoder. This is problematic because it discards valuable information about the source word. To address that, hybrid word-character approach will be adopted. In this part, we will introduce this granularity in detail.",
"Unlike in the conventional word model where out-of-vocabulary words are collapsed into a single UNK symbol, we convert these words into the sequence of constituent characters. Special prefixes are prepended to the characters. The purpose of the prefixes is to show the location of the characters in a word, and to distinguish them from normal in-vocabulary characters. There are three prefixes: INLINEFORM0 B INLINEFORM1 , INLINEFORM2 M INLINEFORM3 , and INLINEFORM4 E INLINEFORM5 , indicating beginning of the word, middle of the word and end of the word, respectively. During decoding, the output may also contain sequences of special tokens. With the prefixes, it is trivial to reverse the tokenization to the original words as part of a post-processing step. Using this approach, in Figure FIGREF11 , we can see the word “龙年” is segmented into “ INLINEFORM6 B INLINEFORM7 龙 INLINEFORM8 E INLINEFORM9 年”, and the word “繁花似锦” is segmented into “ INLINEFORM10 B INLINEFORM11 繁 INLINEFORM12 M INLINEFORM13 花 INLINEFORM14 M INLINEFORM15 似 INLINEFORM16 E INLINEFORM17 锦”."
],
[
"Considering languages with productive word formation processes such as agglutination and compounding, translation models require mechanisms that segment the sentence below the word level (In this paper, we call this level of symbols as subword units). In this part, we will introduce the two different methods of translation granularity on subword level.",
"Byte pair encoding (BPE) BIBREF12 is a compression algorithm. This simple data compression technique iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. This compression method is first introduced into translation granularity by Sennrich et al. BIBREF10 . In this approach, instead of merging frequent pairs of bytes, characters or character sequences will be merged.",
"A detailed introduction of algorithm in learning BPE operations is showed in Sennrich et al. BIBREF10 . During decoding time, each word first split into sequences of characters, then learned operation will be applied to merge the characters into larger, known symbols. For BPE method, a special symbol is also needed to indicate the merging position. In Figure FIGREF11 , the word “繁花似锦” is segmented into three subword units, and the first three units are appended a special suffix “@@”. In decoding step, the translation results contain the special tokens as well. With these suffixes, we can recover the output easily.",
"The wordpiece model (WPM) implementation is initially developed to solve a Japanese/Korean segmentation problem for the speech recognition system BIBREF13 . This approach is completely data-driven and guaranteed to generate a deterministic segmentation for any possible sequence of characters, which is similar to the above method.",
"The wordpiece model is generated using a data-driven approach to maximize the language-model likelihood of the training data, given an evolving word definition. The training method of WPM is described in more detail in Schuster and Nakajima BIBREF13 . As shown in Figure FIGREF11 , a special symbol is only prepended at the beginning of the words. In this case, the words “龙年”, “繁花似锦”, “洋溢” and “祥和” are split into subwords, and the rest words remain the same except for a special prefix “_”."
],
[
"We perform all these translation granularities on the NIST bidirectional Chinese-English translation tasks. The evaluation metric is BLEU BIBREF14 as calculated by the multi-bleu.perl script.",
"Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set."
],
[
"We build the described models modified from the Zoph_RNN toolkit which is written in C++/CUDA and provides efficient training across multiple GPUs. Our training procedure and hyper parameter choices are similar to those used by Luong et al. BIBREF11 . In the NMT architecture as illustrated in Figure FIGREF1 , the encoder has three stacked LSTM layers including a bidirectional layer, followed by a global attention layer, and the decoder contains two stacked LSTM layers followed by the softmax layer.",
"The word embedding dimension and the size of hidden layers are all set to 1000. We limit the maximum length in training corpus to 120. Parameter optimization is performed using both stochastic gradient descent(SGD) method and Adam method BIBREF15 . For the first three epoches, We train using the Adam optimizer and a fixed learning rate of 0.001 without decay. For the remaining six epoches, we train using SGD, and we set learning rate to 0.1 at the beginning and halve the threshold while the perplexity go up on the development set. We set minibatch size to 128. Dropout was also applied on each layer to avoid over-fitting, and the dropout rate is set to 0.2. At test time, we employ beam search with beam size b = 12."
],
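The two-stage optimization schedule above (Adam for three epochs, then SGD with halving whenever development perplexity rises) can be sketched as follows; the function and the perplexity values are hypothetical simplifications, not the paper's training code:

```python
# Sketch of the described schedule: Adam with a fixed learning rate for the
# first three epochs, then SGD starting at 0.1, halved whenever the
# development-set perplexity goes up relative to the previous epoch.
def learning_rates(dev_ppl_per_epoch, adam_epochs=3, adam_lr=0.001, sgd_lr=0.1):
    lrs = []
    prev_ppl = float("inf")
    for epoch, ppl in enumerate(dev_ppl_per_epoch):
        if epoch < adam_epochs:
            lrs.append(("adam", adam_lr))
        else:
            if ppl > prev_ppl:       # perplexity went up: halve the SGD rate
                sgd_lr /= 2
            lrs.append(("sgd", sgd_lr))
        prev_ppl = ppl
    return lrs

# nine epochs of (hypothetical) dev perplexities
print(learning_rates([120, 95, 80, 70, 65, 66, 60, 61, 58]))
```

The schedule halves the SGD rate at epochs 6 and 8 in this example, where the development perplexity increases.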
[
"For Chinese word segmentation, we use our in-house segmentation tools. For English corpus, the training data is tokenized with the Moses tokenizer. We carry out Chinese-to-English translation experiment on 30k vocabulary and 15k vocabulary for both sides respectively, and we also conduct English-to-Chinese translation experiment on 30k vocabulary size. The word level translation granularity is set to our baseline method.",
"For character level, we only segment the Chinese sentences into characters and the English sentences remain the same. For hybrid word-characters level, we segment training corpus for both sides. We rank the word frequency from greatest to least in training corpus, and in order to prevent the pollution from the very rare word, we have to set a segmentation point relatively higher. For 30k vocabulary, the word frequency below 64 is segmented into characters on Chinese side, and the segmentation point is set to 22 on English side. For 15k vocabulary, we set the segmentation point to 350 and 96 on Chinese side and English side respectively. For 60k vocabulary, the frequency of Chinese words below 14 and that of English words below 6 are split into characters.",
"For subword level, two different approaches are used. In BPE method, the number of merge operations is set to 30000 on 30k vocabulary size, 15000 on 15k vocabulary size and 60000 on 60k vocabulary size. For Chinese sentences, we segment the training corpus using our in-house segmentation tools first, and then we can apply the BPE method same as English sentences. Considering the essence of WPM method, we do not have to segment words for Chinese and tokenize sentences for English. That is to say, we can train the WPM without pre-processing step. Hence, for WPM method, we conduct our experiments both on the sentences trained on the raw corpus and the sentences trained on the segmented corpus."
],
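The hybrid word-character preparation described above can be sketched minimally: words whose training-corpus frequency falls below a segmentation point are split into characters, while frequent words are kept whole. The corpus and threshold below are illustrative, not the paper's data or thresholds (64/22 etc.), and the actual method applies a separate threshold per language side:

```python
# Minimal sketch of hybrid word-character segmentation by frequency threshold.
from collections import Counter

def hybrid_segment(tokens, freq, threshold):
    out = []
    for tok in tokens:
        if freq[tok] >= threshold:
            out.append(tok)              # frequent word: keep whole
        else:
            out.extend(list(tok))        # rare word: split into characters
    return out

# toy corpus: word frequencies counted over whitespace-tokenized lines
corpus = ["the cat sat", "the dog sat", "the yak sat"]
freq = Counter(tok for line in corpus for tok in line.split())
print(hybrid_segment("the yak sat".split(), freq, threshold=2))
# → ['the', 'y', 'a', 'k', 'sat']  ("yak" occurs once, below the threshold)
```

With a higher segmentation point, more of the tail of the vocabulary is pushed down to the character level, which is how the method keeps the vocabulary within the 30k/15k/60k budgets.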
[
"We list the BLEU scores of different translation granularities on 30k vocabulary in Table TABREF27 .",
"Row 1 is translation result of the state-of-the-art NMT system with word level. For the character level granularity (Row 2), the translation quality is higher than the word level by only 0.38 BLEU points. The last three lines in Table TABREF27 are subword level translation granularity, which contains BPE method and WPM method. BPE method (Row 4) achieves the best translation performance, which gets an improvement of 1.64 BLEU points over the word level. As for the WPM method (Row 6), the gap between this method and BPE method is narrow. Moreover, hybrid word-character level model (Row 3) outperforms the word level by 1.46 BLEU points, and translation quality of this method is very close to the BPE method. Experiments show that hybrid word-character level granularity and BPE method of subword level granularity are our choices for translation granularity on Chinese-to-English translation task.",
"We execute different translation granularities on the training corpus. To make a comparison, We randomly choose 10000 sentences. Table TABREF29 show the average sentence length of different methods on all granularities.",
"A well-known flaw of NMT model is the inability to properly translate long sentences. However, most of translation granularities will go below the word level. Therefore, as shown in Table TABREF29 , we can get longer sentences than the word level. We wonder what the translation performance of different lengths are on all translation granularities. We follow Bahdanau et al. BIBREF7 to group sentences of similar lengths together and compute a BLEU score per group, as demonstrated in Figure FIGREF30 .",
"In order to make the comparison fair, length refers to the number of tokens split in word level. As above mentioned, hybrid word-character level model is one of suitable granularity choices for Chinese-to-English translation. We can find when the length of sentences is below 20, the translation result of this model outperforms the other models to a great extent. But with the length going up, the advantage over other models is diminishing. The character level granularity performs bad for the sentences whose length are below 20. We think the reason may be that when the sentences are short, the representation of sentence in character level cannot express the sentence meaning well. As for BPE method, we find a strange phenomenon. When the number of words in source sentence is from 60 to 80, the translation performance of BPE method is not so good. However, this method can achieve almost 3 BLEU points higher than next-best approach when the source sentence is longer than 80 words. As shown in Figure FIGREF30 , we can see WPM method does not perform well lower than 60 words in source language. But when the length of sentences is between 60 and 80, this method even outperforms the BPE method by up to 5.51 BLEU points. In this experiment, we conclude that subword model is more effective than other models in handling long sentences.",
"We concern what the translation results of different translation granularities are on smaller vocabulary size. We also carry out the experiment on Chinese-to-English task of 15k vocabulary size.",
"Compared to 30k vocabulary size, the translation performance of word level (Row 1) on 15k vocabulary size is reduced by 2.14 BLEU points. However, character level (Row 2) and hybrid word-character level (Row 3) achieve 42.09 and 43.12 BLEU points respectively, which is on par with quality of translation on 30k vocabulary. Both these two models exceed word level to a great extent. We infer the reason is that both character level and hybrid word-character level can represent source side and target side sentences better than the word level even if the vocabulary size is small. For subword model, translation performance of these methods remain almost the same as 30k vocabulary, which is beyond our imagination. We can find in Table TABREF32 , WPM method (Row 6) outperforms other models, and to our surprise, translation results of both WPM method and WPM methods with raw corpus (Row 5) obtain a higher BLEU points than 30k vocabulary size. We analyze the reason of this phenomenon is that the subword model is not constrained by the vocabulary size. Although the WPM method achieves the best results for the 15k vocabulary size, this method also belongs to subword level translation granularity. We can conclude that subword translation granularity is more suitable for Chinese-to-English translation task.",
"In order to make a comparison of these translation granularities on larger vocabulary size, we perform the our experiment of 60k vocabulary size on Chinese-to-English translation task.",
"We can find in Table TABREF34 , the word and character level (Row 1 and Row 2) on 60k vocabulary size are increased by 1.15 and 1.11 BLEU points respectively compared to 30 vocabulary size. However, to our surprise, all the translation results of subword level granularities on 60k vocabulary are below to the 30k vocabulary size. With the increase of vocabulary size, we add more fine-grained subword segmentation units into vocabulary. We infer that large amount of subword units do not have beneficial effect on the translation results. As for hybrid word-character level, this method achieves 43.97 BLEU points, which is highest among all the translation granularities on 60k vocabulary size. Compared with Table TABREF27 , hybrid word-character level outperforms the best translation result on 30k vocabulary size (BPE method) by 0.22 BLEU points.",
"We also conduct experiments that we use different translation granularities on source and target side. In order to carry out the experiments easily, we only compare several granularities pairs.",
"In Table TABREF36 , we can find that when the source translation granularity is word level (Row 2 and Row 3), the translation performances are relative poor, even worse than the word level of both sides in Table TABREF27 . As for BPE method on source side, the hybrid word-character on target side obtains 43.73 BLEU points (Row 6), which is close to the best translation result in Table TABREF27 . Hybrid_BPE method achieves up to 44.26 BLEU points (Row 4), which is even higher than BPE method by up to 0.51 BLEU points. This method can acquire best translation result for Chinese-to-English translation task."
],
[
"We evaluate different translation granularities on the English-to-Chinese translation tasks, whose results are presented in Table TABREF39 .",
"We find that hybrid word-character level (Row 3) granularity obtains significant accuracy improvements over word level and this granularity is also superior to other granularities on large-scale English-to-Chinese translation. BPE method (Row 4) in this task does not perform well as Chinese-to-English task, the translation quality of it is lower than hybrid word-character model by up to 0.97 BLEU points. However, another subword level translation granularity WPM method (Row 6) achieves 22.14 BLEU points, which is near the hybrid word-character level. Although the vocabulary of character level on Chinese side is only 7.2k, it can also obtain 19.64 BLEU points (Row 2), which is on par with translation performance of word level.",
"As Chinese-to-English translation task, we carry out experiments on English-to-Chinese translation for different granularities. According to Table TABREF36 , Hybrid_BPE and BPE_Hybrid methods acquire relative higher translation quality than other methods. Therefore, in this section we only use these two methods to test which is most suitable for English-to-Chinese translation task.",
"Table TABREF41 shows that translation performances of both two methods are below to the Hybrid word-character granularity in Table TABREF39 . BPE_Hybrid method (Row 2) achieves 22.12 BLEU points, which is higher than Hybrid_BPE method by 0.39 BLEU points and is near the translation quality of WPM method in Table TABREF39 ."
],
[
"The recently proposed neural machine translation has drawn more and more attention. Most of existing work in neural machine translation focus on handling rare words BIBREF16 , BIBREF10 , BIBREF17 , integrating SMT strategies BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , designing the better framework BIBREF22 , BIBREF11 , BIBREF23 and addressing the low resource scenario BIBREF24 , BIBREF25 , BIBREF26 .",
"As for strategies for dealing with rare and unknown words, a number of authors have endeavored to explore methods for addressing them. Luong et al. BIBREF11 and Li et al. BIBREF16 propose simple alignment-based technique that can replace out-of-vocabulary words with similar words. Jean et al. BIBREF27 use a large vocabulary with a method based on importance sampling.",
"In addition, another direction to achieve rare words problem in NMT is changing the granularity of segmentation. Chung et al. BIBREF8 focus on handling translation at the level of characters without any word segmentation only on target side. Luong et al. BIBREF9 propose a novel hybrid architecture that combines the strength of both word and character-based models. Sennrich et al. BIBREF10 use BPE method to encode rare and unknown words as sequences of subword units. Wu et al. BIBREF5 use both WPM method and hybrid word-character model in their online translation system. However, there is no study that shows which translation granularity is suitable for translation tasks involving Chinese language. Our goal in this work is to make an empirical comparison of different translation granularities for bidirectional Chinese-English translation tasks."
],
[
"In this work, we provide an extensive comparison for translation granularities in Chinese-English NMT, such as word, character, subword and hybrid word-character. We have also discussed the advantages and disadvantages of various translation granularities in detail. For the same granularity on both sides, the experiments demonstrate that the subword model best fits Chinese-to-English translation with the vocabulary that is not so big, while the hybrid word-character approach obtains the highest performance on English-to-Chinese translation. In addition, experiments on different granularities show that Hybrid_BPE method can acquire best result for Chinese-to-English translation task."
],
[
"The research work has been funded by the Natural Science Foundation of China under Grant No. 61333018 and No. 61402478, and it is also supported by the Strategic Priority Research Program of the CAS under Grant No. XDB02070007."
]
]
}
|
{
"question": [
"Where does the vocabulary come from?",
"What is the worst performing translation granularity?",
"What dataset did they use?"
],
"question_id": [
"da495e2f99ee2d5db9cc17eca5517ddaa5ea8e42",
"e44a5514d7464993997212341606c2c0f3a72eb4",
"310e61b9dd4d75bc1bebbcb1dae578f55807cd04"
],
"nlp_background": [
"",
"",
""
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"LDC corpus"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set."
],
"highlighted_evidence": [
"Our training data consists of 2.09M sentence pairs extracted from LDC corpus."
]
}
],
"annotation_id": [
"10de01cb0a016dbba7f443855672264162d2d3f1"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"ea952900020e5644a760ca77dca5760227ba16ad"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"LDC corpus",
"NIST 2003(MT03)",
"NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06)",
"NIST 2008(MT08)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our training data consists of 2.09M sentence pairs extracted from LDC corpus. Table 1 shows the detailed statistics of our training data. To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set."
],
"highlighted_evidence": [
"Our training data consists of 2.09M sentence pairs extracted from LDC corpus.",
"To test different approaches on Chinese-to-English translation task, we use NIST 2003(MT03) dataset as the validation set, and NIST 2004(MT04), NIST 2005(MT05), NIST 2006(MT06) datasets as our test sets. For English-to-Chinese translation task, we also use NIST 2003(MT03) dataset as the validation set, and NIST 2008(MT08) will be used as test set."
]
}
],
"annotation_id": [
"3cbe430ce10309d266ee031fc8e4e4665a7bccfe"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Fig. 1. The architecture of neural machine translation model.",
"Fig. 2. An example of different translation granularities",
"Table 1. The characteristics of our training dataset on the LDC corpus.",
"Table 2. Translation results (BLEU score) of 30k vocabulary for Chinese-to-English translation.",
"Table 3. Sentence length of different translation granularities.",
"Fig. 3. Length Analysis - translation qualities(BLEU score) of different lengths.",
"Table 4. Translation results (BLEU score) of 15k vocabulary for Chinese-to-English translation.",
"Table 5. Translation results (BLEU score) of 60k vocabulary for Chinese-to-English translation.",
"Table 6. Translation results (BLEU score) of 30k vocabulary for different granularities on Chinese-to-English translation.",
"Table 7. Translation results (BLEU score) for English-to-Chinese translation."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"6-Table1-1.png",
"8-Table2-1.png",
"9-Table3-1.png",
"9-Figure3-1.png",
"10-Table4-1.png",
"11-Table5-1.png",
"11-Table6-1.png",
"12-Table7-1.png"
]
}
|
1907.08501
|
A Comparative Evaluation of Visual and Natural Language Question Answering Over Linked Data
|
With the growing number and size of Linked Data datasets, it is crucial to make the data accessible and useful for users without knowledge of formal query languages. Two approaches towards this goal are knowledge graph visualization and natural language interfaces. Here, we investigate specifically question answering (QA) over Linked Data by comparing a diagrammatic visual approach with existing natural language-based systems. Given a QA benchmark (QALD7), we evaluate a visual method which is based on iteratively creating diagrams until the answer is found, against four QA systems that have natural language queries as input. Besides other benefits, the visual approach provides higher performance, but also requires more manual input. The results indicate that the methods can be used complementary, and that such a combination has a large positive impact on QA performance, and also facilitates additional features such as data exploration.
|
{
"section_name": [
"INTRODUCTION",
"RELATED WORK",
"SYSTEM DESCRIPTION",
"EVALUATION",
"Evaluation Setup",
"Evaluation Results and Discussion",
"CONCLUSIONS",
"ACKNOWLEDGEMENTS"
],
"paragraphs": [
[
"The Semantic Web provides a large number of structured datasets in form of Linked Data. One central obstacle is to make this data available and consumable to lay users without knowledge of formal query languages such as SPARQL. In order to satisfy specific information needs of users, a typical approach are natural language interfaces to allow question answering over the Linked Data (QALD) by translating user queries into SPARQL BIBREF0 , BIBREF1 . As an alternative method, BIBREF2 propose a visual method of QA using an iterative diagrammatic approach. The diagrammatic approach relies on the visual means only, it requires more user interaction than natural language QA, but also provides additional benefits like intuitive insights into dataset characteristics, or a broader understanding of the answer and the potential to further explore the answer context, and finally allows for knowledge sharing by storing and sharing resulting diagrams.",
"In contrast to BIBREF2 , who present the basic method and tool for diagrammatic question answering (DQA), here we evaluate DQA in comparison to natural language QALD systems. Both approaches have different characteristics, therefore we see them as complementary rather than in competition.",
"The basic research goals are: i) Given a dataset extracted from the QALD7 benchmark, we evaluate DQA versus state-of-the-art QALD systems. ii) More specifically, we investigate if and to what extent DQA can be complementary to QALD systems, especially in cases where those systems do not find a correct answer. iii) Finally, we want to present the basic outline for the integration of the two methods.",
"In a nutshell, users that applied DQA found the correct answer with an F1-score of 79.5%, compared to a maximum of 59.2% for the best performing QALD system. Furthermore, for the subset of questions where the QALD system could not provide a correct answer, users found the answer with 70% F1-score with DQA. We further analyze the characteristics of questions where the QALD or DQA, respectively, approach is better suited.",
"The results indicate, that aside from the other benefits of DQA, it can be a valuable component for integration into larger QALD systems, in cases where those systems cannot find an answer, or when the user wants to explore the answer context in detail by visualizing the relevant nodes and relations. Moreover, users can verify answers given by a QALD system using DQA in case of doubt.",
"This publication is organized as follows: After the presentation of related work in Section SECREF2 , and a brief system description of the DQA tool in Section SECREF3 , the main focus of the paper is on evaluation setup and results of the comparison of DQA and QALD, including a discussion, in Section SECREF4 . The paper concludes with Section SECREF5 ."
],
[
"As introduced in BIBREF2 we understand diagrammatic question answering (DQA) as the process of QA relying solely on visual exploration using diagrams as a representation of the underlying knowledge source. The process includes (i) a model for diagrammatic representation of semantic data which supports data interaction using embedded queries, (ii) a simple method for step-by-step construction of diagrams with respect to cognitive boundaries and a layout that boosts understandability of diagrams, (iii) a library for visual data exploration and sharing based on its internal data model, and (iv) an evaluation of DQA as knowledge understanding and knowledge sharing tool. BIBREF3 propose a framework of five perspectives of knowledge visualization, which can be used to describe certain aspects of the DQA use cases, such as its goal to provide an iterative exploration method, which is accessible to any user, the possibility of knowledge sharing (via saved diagrams), or the general purpose of knowledge understanding and abstraction from technical details.",
"Many tools exist for visual consumption and interaction with RDF knowledge bases, however, they are not designed specifically towards the question answering use case. BIBREF4 give an overview of ontology and Linked Data visualization tools, and categorize them based on the used visualization methods, interaction techniques and supported ontology constructs.",
"Regarding language-based QA over Linked Data, BIBREF5 discuss and study the usefulness of natural language interfaces to ontology-based knowledge bases in a general way. They focus on usability of such systems for the end user, and conclude that users prefer full sentences for query formulation and that natural language interfaces are indeed useful.",
" BIBREF0 describe the challenges of QA over knowledge bases using natural language, and elaborate the various techniques used by existing QALD systems to overcome those challenges. In the present work, we compare DQA with four of those systems using a subset of questions of the QALD7 benchmark. Those systems are: gAnswer BIBREF6 , an approach for RDF QA that takes a “graph-driven” perspective; in contrast to traditional approaches, which first try to understand the question and then evaluate the query, in gAnswer the intention of the query is modeled in a structured way, which leads to a subgraph matching problem. Secondly, QAKiS BIBREF7 is a QA system over structured knowledge bases such as DBpedia that makes use of relational patterns which capture different ways to express a certain relation in a natural language in order to construct a target-language (SPARQL) query. Further, Platypus BIBREF8 is a QA system on Wikidata. It represents questions in an internal format related to dependency-based compositional semantics which allows for question decomposition and language independence. The platform can answer complex questions in several languages by using hybrid grammatical and template-based techniques. Finally, the WDAqua BIBREF0 system also aims for language independence and for being agnostic of the underlying knowledge base. WDAqua puts more importance on word semantics than on the syntax of the user query, and follows a process of query expansion, SPARQL construction, query ranking and answer decision.",
"For the evaluation of QA systems, several benchmarks have been proposed such as WebQuestions BIBREF9 or SimpleQuestions BIBREF10 . However, the most popular benchmarks in the Semantic Web field arise from the QALD evaluation campaign BIBREF1 . The recent QALD7 evaluation campaign includes task 4: “English question answering over Wikidata” which serves as basis to compile our evaluation dataset."
],
[
"The DQA functionality is part of the Ontodia tool. The initial idea of Ontodia was to enable the exploration of semantic graphs for ordinary users. Data exploration is about efficiently extracting knowledge from data even in situations where it is unclear what is being looked for exactly BIBREF11 .",
"The DQA tool uses an incremental approach to exploration typically starting from a very small number of nodes. With the context menu of a particular node, relations and related nodes can be added until the diagram fulfills the information need of the user. Figure FIGREF1 gives an example of a start node, where a user wants to learn more about the painting style of Van Gogh.",
"To illustrate the process, we give a brief example here. More details about the DQA tool, the motivation for DQA and diagram-based visualizations are found in previous work BIBREF2 , BIBREF12 .",
"As for the example, when attempting to answer a question such as “Who is the mayor of Paris?” the first step for a DQA user is finding a suitable starting point, in our case the entity Paris. The user enters “Paris” into the search box, and can then investigate the entity on the tool canvas. The information about the entity stems from the underlying dataset, for example Wikidata. The user can – in an incremental process – search in the properties of the given entity (or entities) and add relevant entities onto the canvas. In the given example, the property “head of government” connects the mayor to the city of Paris, Anne Hidalgo. The final diagram which answers the given question is presented in Figure FIGREF3 ."
],
[
"Here we present the evaluation of DQA in comparison to four QALD systems."
],
[
"As evaluation dataset, we reuse questions from the QALD7 benchmark task 4 “QA over Wikidata”. Question selection from QALD7 is based on the principles of question classification in QA BIBREF13 . Firstly, it is necessary to define question types which correspond to different scenarios of data exploration in DQA, as well as the type of expected answers and the question focus. The question focus refers to the main information in the question which help a user find the answer. We follow the model of BIBREF14 who categorize questions by their question word into WHO, WHICH, WHAT, NAME, and HOW questions. Given the question and answer type categories, we created four questionnaires with nine questions each resulting in 36 questions from the QALD dataset. The questions were picked in equal number for five basic question categories.",
"20 persons participated in the DQA evaluation – 14 male and six female from eight different countries. The majority of respondents work within academia, however seven users were employed in industry. 131 diagrams (of 140 expected) were returned by the users.",
"The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 .",
"For the QALD tools, a human evaluator pasted the questions as is into the natural language Web interfaces, and submitted them to the systems. Typically QALD tools provide a distinct answer, which may be a simple literal, or a set of entities which represent the answer, and which can be compared to the gold standard result. However, the WDAqua system, sometimes, additionally to the direct answer to the question, provides links to documents related to the question. We always chose the answer available via direct answer.",
"To assess the correctness of the answers given both by participants in the DQA experiments, and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. INLINEFORM1 is the faction of correct answer (parts) given divided by all correct ones in the gold answer, and INLINEFORM2 is the harmonic mean of INLINEFORM3 and INLINEFORM4 . As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then INLINEFORM5 , INLINEFORM6 and INLINEFORM7 .",
"For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants and available online."
],
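The precision/recall/F1 computation used in the evaluation setup, including the Albert Einstein example from the text, can be sketched as:

```python
# Precision, recall and F1 over answer items, matching the worked example:
# gold answer {"Ulm"}, system answer {"Ulm", "Bern"}.
def prf1(system, gold):
    system, gold = set(system), set(gold)
    correct = len(system & gold)
    p = correct / len(system) if system else 0.0   # correct / all answers given
    r = correct / len(gold) if gold else 0.0       # correct / all gold answers
    f1 = 2 * p * r / (p + r) if p + r else 0.0     # harmonic mean of P and R
    return p, r, f1

print(prf1({"Ulm", "Bern"}, {"Ulm"}))
# → (0.5, 1.0, 0.6666666666666666)
```

The per-question DQA score is then the average of these values over the four evaluators.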
[
"Table TABREF8 presents the overall evaluation metrics of DQA, and the four QALD tools studied. With the given dataset, WDAqua (56.1% F1) and gAnswer (59.2% F1) clearly outperform askplatyp.us (8.6% F1) and QAKiS (27.5% F1). Detailed results per question including the calculation of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 scores are available online. DQA led to 79.5% F1 (80.1% precision and 78.5% recall).",
"In further evaluations, we compare DQA results to WDAqua in order to study the differences and potential complementary aspects of the approaches. We selected WDAqua as representative of QALD tools, as it provides state-of-the-art results, and is well grounded in the Semantic Web community. ",
"Comparing DQA and WDAqua, the first interesting question is: To what extend is DQA helpful on questions that could not be answered by the QALD system? For WDAqua the overall F1 score on our test dataset is INLINEFORM0 . For the subset of questions where WDAqua had no, or only a partial, answer, DQA users found the correct answer in INLINEFORM1 of cases. On the other hand, the subset of questions that DQA users (partially) failed to answer, were answered correctly by WDAqua with an F1 of INLINEFORM2 . If DQA is used as a backup method for questions not correctly answered with WDAqua, then overall F1 can be raised to INLINEFORM3 . The increase from INLINEFORM4 to INLINEFORM5 demonstrates the potential of DQA as complementary component in QALD systems.",
"As expected, questions that are difficult to answer with one approach are also harder for the other approach – as some questions in the dataset or just more complex to process and understand than others. However, almost 70% of questions not answered by WDAqua could still be answered by DQA. As examples of cases which are easier to answer for one approach than the other, a question that DQA users could answer, but where WDAqua failed is: “What is the name of the school where Obama's wife studied?”. This complex question formulation is hard to interpret correctly for a machine. In contrast to DQA, QALD systems also struggled with “Who is the son of Sonny and Cher?”. This question needs a lot of real-world knowledge to map the names Sonny and Cher to their corresponding entities. The QALD system needs to select the correct Cher entity from multiple options in Wikidata, and also to understand that “Sonny” refers to the entity Sonny Bono. The resulting answer diagram is given in Figure FIGREF17 . More simple questions, like “Who is the mayor of Paris?” were correctly answered by WDAqua, but not by all DQA users. DQA participants in this case struggled to make the leap from the noun “mayor” to the head-of-government property in Wikidata.",
"Regarding the limits of DQA, this method has difficulties when the answer can be obtained only with joins of queries, or when it is hard to find the initial starting entities related to question focus. For example, a question like “Show me the list of African birds that are extinct.” typically requires an intersection of two (large) sets of candidates entities, ie. all African birds and extinct birds. Such a task can easily be represented in a SPARQL query, but is hard to address with diagrams, because it would require placing, and interacting with, a huge amount of nodes on the exploration canvas.",
"Overall, the experiments indicate, that additionally to the use cases where QALD and DQA are useful on their own, there is a lot of potential in combining the two approaches, especially by providing a user the opportunity to explore the dataset with DQA if QALD did not find a correct answer, or when a user wants to confirm the QALD answer by checking in the underlying knowledge base. Furthermore, visually exploring the dataset provides added benefits, like understanding the dataset characteristics, sharing of resulting diagrams (if supported by the tool), and finding more information related to the original information need.",
"For the integration of QALD and DQA, we envision two scenarios. The first scenario addresses plain question answering, and here DQA can be added to a QALD system for cases where a user is not satisfied with a given answer. The QALD Web interface can for example have a Explore visually with diagrams button, which brings the user to a canvas on which the entities detected by the QALD system within the question and results (if any) are displayed on the canvas as starting nodes. The user will then explore the knowledge graph and find the answers in the same way as the participants in our experiments. The first scenario can lead to a large improvement in answer F1 (see above).",
"The second scenario of integration of QALD and DQA focuses on the exploration aspect. Even if the QALD system provides the correct answer, a user might be interested to explore the knowledge graph to validate the result and to discover more interesting information about the target entities. From an implementation and UI point of view, the same Explore visually with diagrams button and pre-population of the canvas can be used. Both scenarios also provide the additional benefits of potentially saving and sharing the created diagrams, which elaborate the relation between question and answer."
],
[
"In this work, we compare two approaches to answer questions over Linked Data datasets: a visual diagrammatic approach (DQA) which involves iterative exploration of the graph, and a natural language-based (QALD). The evaluations show, that DQA can be a helpful addition to pure QALD systems, both regarding evaluation metrics (precision, recall, and F1), and also for dataset understanding and further exploration. The contributions include: i) a comparative evaluation of four QALD tools and DQA with a dataset extracted from the QALD7 benchmark, ii) an investigation into the differences and potential complementary aspects of the two approaches, and iii) the proposition of integration scenarios for QALD and DQA.",
"In future work we plan to study the integration of DQA and QALD, especially the aspect of automatically creating an initial diagram from a user query, in order to leverage the discussed potentials. We envision an integrated tool, that uses QALD as basic method to find an answer to a question quickly, but also allows to explore the knowledge graph visually to raise answer quality and support exploration with all its discussed benefits."
],
[
"This work was supported by the Government of the Russian Federation (Grant 074-U01) through the ITMO Fellowship and Professorship Program."
]
]
}
|
{
"question": [
"How do they measure performance?",
"Do they measure the performance of a combined approach?",
"Which four QA systems do they use?",
"How many iterations of visual search are done on average until an answer is found?",
"Do they test performance of their approaches using human judgements?"
],
"question_id": [
"bdc6664cec2b94b0b3769bc70a60914795f39574",
"e40df8c685a28b98006c47808f506def68f30e26",
"9653c89a93ac5c717a0a26cf80e9aa98a5ccf910",
"b921a1771ed0ba9dbeff9da000336ecf2bb38322",
"412aff0b2113b7d61c914edf90b90f2994390088"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants and available online."
],
"highlighted_evidence": [
"For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question."
]
}
],
"annotation_id": [
"112536f1599e1ce56c95a34ed0380a178c943b84"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"c58bd28ec93c12b1a7284f2cce1c2141a455b58c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 ."
],
"highlighted_evidence": [
"The same 36 questions were answered using four QALD tools: WDAqua BIBREF0 , QAKiS BIBREF7 , gAnswer BIBREF6 and Platypus BIBREF8 ."
]
}
],
"annotation_id": [
"6b7fcc35cd56421312b9f13d1fe2bf835abe365a"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"9b3774718e9daf7fee2754aafe18f5145f17fd31"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants and available online.",
"To assess the correctness of the answers given both by participants in the DQA experiments, and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. INLINEFORM1 is the faction of correct answer (parts) given divided by all correct ones in the gold answer, and INLINEFORM2 is the harmonic mean of INLINEFORM3 and INLINEFORM4 . As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then INLINEFORM5 , INLINEFORM6 and INLINEFORM7 ."
],
"highlighted_evidence": [
"For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants and available online.",
"To assess the correctness of the answers given both by participants in the DQA experiments, and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given."
]
}
],
"annotation_id": [
"f0e134d719b049ee7e02c8f228f20b0de622292e"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: After placing the Wikidata entity Van Gogh onto the canvas, searching properties related to his “style” with Ontodia DQA tool.",
"Figure 2: Answering the question: Who is the mayor of Paris?",
"Table 1: Overall performance of DQA and the four QALD tools – measured with precision, recall and F1 score.",
"Figure 3: Answering the question: Who is the son of Sonny and Cher? with DQA."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Figure3-1.png"
]
}
|
2001.02943
|
Binary and Multitask Classification Model for Dutch Anaphora Resolution: Die/Dat Prediction
|
The correct use of Dutch pronouns 'die' and 'dat' is a stumbling block for both native and non-native speakers of Dutch due to the multiplicity of syntactic functions and the dependency on the antecedent's gender and number. Drawing on previous research conducted on neural context-dependent dt-mistake correction models (Heyman et al. 2018), this study constructs the first neural network model for Dutch demonstrative and relative pronoun resolution that specifically focuses on the correction and part-of-speech prediction of these two pronouns. Two separate datasets are built with sentences obtained from, respectively, the Dutch Europarl corpus (Koehn 2015) - which contains the proceedings of the European Parliament from 1996 to the present - and the SoNaR corpus (Oostdijk et al. 2013) - which contains Dutch texts from a variety of domains such as newspapers, blogs and legal texts. Firstly, a binary classification model solely predicts the correct 'die' or 'dat'. The classifier with a bidirectional long short-term memory architecture achieves 84.56% accuracy. Secondly, a multitask classification model simultaneously predicts the correct 'die' or 'dat' and its part-of-speech tag. The model containing a combination of a sentence and context encoder with both a bidirectional long short-term memory architecture results in 88.63% accuracy for die/dat prediction and 87.73% accuracy for part-of-speech prediction. More evenly-balanced data, larger word embeddings, an extra bidirectional long short-term memory layer and integrated part-of-speech knowledge positively affect die/dat prediction performance, while a context encoder architecture raises part-of-speech prediction performance. This study shows promising results and can serve as a starting point for future research on machine learning models for Dutch anaphora resolution.
|
{
"section_name": [
"Introduction",
"Related Work",
"Dataset",
"Preprocessing",
"Binary Classification Model ::: Model Architecture",
"Binary Classification Model ::: Experimental Set-Up",
"Binary Classification Model ::: Results",
"Multitask Classification Model ::: Model Architecture",
"Multitask Classification Model ::: Experimental Set-up",
"Multitask Classification Model ::: Results",
"Discussion",
"Conclusion"
],
"paragraphs": [
[
"Following previous research on automatic detection and correction of dt-mistakes in Dutch BIBREF0, this paper investigates another stumbling block for both native and non-native speakers of Dutch: the correct use of die and dat. The multiplicity of syntactic functions and the dependency on the antecedent's gender and number make this a challenging task for both human and computer. The grammar concerning die and dat is threefold. Firstly, they can be used as dependent or independent demonstrative pronouns (aanwijzend voornaamwoord), with the first replacing the article before the noun it modifies and the latter being a noun phrase that refers to a preceding/following noun phrase or sentence. The choice between the two pronouns depends on the gender and number of the antecedent: dat refers to neuter, singular nouns and sentences, while die refers to masculine, singular nouns and plural nouns independent of their gender. Secondly, die and dat can be used as relative pronouns introducing relative clauses (betrekkelijk voornaamwoord), which provide additional information about the directly preceding antecedent it modifies. Similar rules as for demonstrative pronouns apply: masculine, singular nouns and plural nouns are followed by relative pronoun die, neuter singular nouns by dat. Lastly, dat can be used as a subordinating conjunction (onderschikkend voegwoord) introducing a subordinating clause. An brief overview of the grammar is given in Table TABREF1.",
"The aim is to develop (1) a binary classification model that automatically detects, predicts and corrects die and dat instances in texts. Furthermore, the correct die/dat instance and the syntactic function of the predicted die and dat are jointly predicted in (2) a multitask classification model. Whereas research on neural-based, machine learning approaches for Dutch demonstrative and relative pronoun resolution - especially for die and dat - is to our knowledge non-existing, this project can serve as a starting point for further research on machine learning applications concerning Dutch subordinating conjunctions, demonstrative pronouns and relative pronouns."
],
[
"The incentive for this research project is the detection and correction system for dt-mistakes in Dutch BIBREF0. For that task, a system with a context encoder - a bidirectional LSTM with attention mechanism - and verb encoder - of which the outputs are then fed to a feedforward neural network - has been developed to predict different verb suffixes. As mentioned above, this project explores the possibility of constructing a neural network system for correcting Dutch demonstrative and relative pronouns die and dat. The task is also called pronoun resolution or anaphora resolution. Anaphora resolution and pronoun prediction has been major research subjects in machine translation research. In BIBREF3, for example, the effect of multiple English coreference resolvers on the pronoun translation in English-Dutch machine translation system with deep transfer has been investigated. Niton, Morawiecki and Ogrodnizuk (2018) developed a fully connected network with three layers in combination with a sieve-based architecture for Polish coreference resolution BIBREF4. Not only in machine translation, but also in general much research has been conducted on machine learning approaches towards coreference resolution BIBREF5BIBREF6BIBREF7 and pronoun resolution BIBREF8, BIBREF9. However, little to no research has been conducted specifically on die/dat correction."
],
[
"The datasets used for training, validation and testing contain sentences extracted from the Europarl corpus BIBREF1 and SoNaR corpus BIBREF2. The Europarl corpus is an open-source parallel corpus containing proceedings of the European Parliament. The Dutch section consists of 2,333,816 sentences and 53,487,257 words. The SoNaR corpus comprises two corpora: SONAR500 and SONAR1. The SONAR500 corpus consists of more than 500 million words obtained from different domains. Examples of text types are newsletters, newspaper articles, legal texts, subtitles and blog posts. All texts except for texts from social media have been automatically tokenized, POS tagged and lemmatized. It contains significantly more data and more varied data than the Europarl corpus. Due to the high amount of data in the corpus, only three subparts are used: Wikipedia texts, reports and newspaper articles. These subparts are chosen because the number of wrongly used die and dat is expected to be low."
],
[
"The sentences in the Europarl corpus are tokenized and parsed using the Dutch version of TreeTagger BIBREF10. Only sentences which contain at least one die or dat are extracted from the corpora. Subsequently, each single occurrence of die and dat is detected and replaced by a unique token ('PREDICT'). When there are multiple occurrences in one sentence, only one occurrence is replaced at a time. Consequently, a sentence can appear multiple times in the training and test dataset with the unique token for die and dat at a different place in the sentence. Each sentence is paired with its automatically assigned ground truth label for die and dat. The Europarl dataset, on the one hand, contains 70,057 dat-labeled and 33,814 die-labeled sentences. The resulting train and test sets consist of 103,871 (Europarl) and 1,269,091 (SoNaR) sentences. The SoNaR dataset, on the other hand, has more than ten times the number of labeled sentences with 736,987 dat-labeled and 532,104 die-labeled. Considering the imbalance in both datasets, it may be argued that dat occurs more frequently than die due to its syntactic function as subordinating conjunction and not to its use as demonstrative pronoun whereas it can only refer to singular, neutral nouns. As for the multitask classification model, the POS tags for die and dat present in the SoNaR corpus are extracted and stored as ground truth labels: 407,848 subordinating conjunction, 387,292 relative pronoun and 473,951 demonstrative pronoun. From a brief qualitative assessment on the POS tags for die and dat in both corpora, the POS tags in the SoNaR corpus appear to be more reliable than the POS tags generated by TreeTagger in the Europarl corpus. Therefore, only the SoNaR dataset is used for the multitask classification. An overview of the datasets after preprocessing is given in Table TABREF2."
],
[
"For the binary classification model that predicts the correct die or dat for each sentence, a Bidirectional Long-Short Term Memory (BiLSTM) neural network is computed. Whereas the antecedent can be rather distant from the relative or demonstrative pronoun due to adjectives and sentence boundaries, an LSTM architecture is chosen over a regular Recurrent Neural Network while the latter does not cope well with learning non-trivial long-distance dependencies BIBREF11. Furthermore, a bidirectional LSTM is chosen over a single left-to-right LSTM, whereas the antecedent can be either before or after the die or dat. The architecture of the binary classification model is provided in Fig. FIGREF7. The input sentence is first sent through an embedding layer where each token is transformed to a 100-dimensional word embedding which have been initially trained on the dataset of sentences containing at least one die or dat using the Word2Vec Skip-gram model BIBREF12. The weights of the embedding layer are trainable. The word embeddings are then sent through a BiLSTM layer. The bidirectional LSTM concatenates the outputs of two LSTMs: the left-to-right $LSTM_{forward}$ computes the states $\\overrightarrow{h_1}..\\overrightarrow{h_N}$ and the right-to-left $LSTM_{backward}$ computes the states $\\overleftarrow{h_N}..\\overleftarrow{h_1}$. This means that at time $t$ for input $x$, represented by its word embedding $E(x)$, the bidirectional LSTM outputs the following:",
"The concatenated output is then sent through a maxpooling layer, linear layer and, eventually, a softmax layer to get a probability distribution over the two classes. In order to prevent the model from overfitting and co-adapting too much, dropout regularization is implemented in the embedding layer and the linear layer. In both layers, dropout is set to $p = 0.5$ which randomly zeroes out nodes in the layer using samples from a Bernoulli distribution."
],
[
"Each dataset is randomly divided into a training (70%), validation (15%) and test set (15%). The data is fed to the model in batches of 128 samples and reshuffled at every epoch. The objective function that is minimized is Binary Cross-Entropy:",
"where $y_i$ is the ground truth label (0 for dat and 1 for die) and $p(\\hat{y}_i)$ is the probability of the predicted label for all $N$ input sentences of the train set. The weights are optimized by Stochastic Gradient Descent with learning rate = 0.01 and momentum = 0.9. The data is fed to the model in 24 epochs."
],
[
"An overview of the performance results is given in Table TABREF11. We compare model performance when trained and tested on the two corpora individually and experiment with different settings of the two corpora in order to investigate the effect of dataset changes on model performance. There are three settings: full in which the datasets contain full sentences, windowed in which sentences are windowed around the unique prediction token without exceeding sentence boundaries (five tokens before and after the token, including token), and windowed no_boundaries in which the windows can exceed sentence boundaries. When limiting the input sentences to windowed sentences in the Europarl corpus(2), model performance increases significantly on all metrics, especially for die prediction performance. The difference in model performance when trained and tested on the Europarl (2) and SoNaR (3) windowed datasets is particularly noticeable in the precision, recall and F1 scores. Model performance for dat prediction is better for the Europarl dataset than for the SoNaR dataset, while model performance for die prediction is notably better for the SoNaR dataset than for the Europarl dataset. Lastly, a change in windowing seems to have a positive impact on the overall model performance: the model trained and tested on the SoNaR dataset with windows exceeding sentence boundaries (3) outperforms that on the SoNaR dataset with windows within sentence boundaries (4) on every metric."
],
[
"The second model performs two prediction tasks. The first prediction task remains the binary classification of die and dat. The second prediction task concerns the prediction of three parts-of-speech (POS) or word classes, namely subordinating conjunction, relative pronoun and demonstrative pronoun. An overview of the model architectures is given in Fig. FIGREF13. For the BiLSTM model, the first layer is the embedding layer where the weights are initialized by means of the 200-dimensional pre-trained embedding matrix. The weights are updated after every epoch. The second layer consists of two bidirectional LSTMs where the output of the first bidirectional LSTM serves as input to the second bidirectional LSTM. The layer has dropout regularization equal to 0.2. The two-layer bidirectional LSTM concatenates the outputs at time $t$ into a 64-dimensional vector and sends it through a maxpooling layer. Until this point, the two task share the same parameters. The model than splits into two separate linear layers. The left linear layer transforms the 64-dimensional vector to a two-dimensional vector on which the softmax is computed. The softmax outputs the probability distribution over the dat and die labels. The right linear layer transforms the 64-dimensional vector to a three-dimensional vector on which the softmax is computed as well. The softmax outputs the probability distribution over the subordinating conjunction, relative pronoun and demonstrative pronoun labels. The second multitask classification model takes the immediate context around the 'PREDICT' token as additional input. Both the windowed sentence and context are first transformed into their word embedding representations. They are, then, separately sent through a sentence encoder and context encoder, respectively. The sentence encoder has the same architecture as the second and third layer of the BiLSTM model, namely a two-layer bidirectional LSTM and a maxpooling layer. 
For the context encoder, we experiment with two different architectures: a feedforward neural network and a one-layer bidirectional LSTM with dropout = 0.2 with a maxpooling layer on top. Both sentence and context encoder output a 64-dimensional vector which are, consequently, concatenated to a 128-dimensional vector. As in the BiLSTM model, the resulting vector is sent through two separate linear layers to output probability distributions for both the die/dat and POS prediction task."
],
[
"As discussed in Section SECREF4, the POS ground truth labels in SoNaR-based datasets are more reliable than the POS labels in the Europarl-based datasets that are generated by TreeTagger. Consequently, only the SoNaR dataset is used for training and testing. The dataset is randomly divided into a training (70%), validation (15%) and test (15%) set. The data is fed into the model in batches of 516 samples and the data is reshuffled at every epoch. For die/dat prediction, the Binary Cross-Entropy loss function is minimized. The weights are optimized by Stochastic Gradient Descent with learning rate = 0.01 and momentum = 0.9. For POS prediction, Cross-Entropy is minimized:",
"where $C$ is the number of classes, in this case three, $y_{i,c}$ is the binary indicator (0 or 1) if class label $c$ is the correct predicted classification for input sentence $i$ and $p$ is the probability of sentence $i$ having class label $c$. The weights are optimized using Adam optimization with learning rate = 0.0001. The data is fed to the model in 35 epochs."
],
[
"An overview of the performance results for die/dat prediction is given in Table TABREF19. The same dataset settings as for the binary classification model are used: full in which the datasets contain full sentences, windowed in which sentences are windowed around the unique prediction token without exceeding sentence boundaries (five tokens before and after the token, including token), and windowed no_boundaries in which the windows can exceed sentence boundaries. As mentioned in section SECREF4, we only use the SoNaR dataset. The multitask classification models generally perform better with the windowed no_boundaries dataset setting. Concerning the model architectures, it can be concluded that altering the model architecture has no large impact on model performance for die/dat prediction. However, altering the model architecture from an architecture with merely a sentence encoder to an architecture with both sentence and context encoder does have a more significant positive impact on model performance for POS prediction (Table TABREF20). For that prediction task, the multitask classification model with a bidirectional LSTM context encoder trained and tested on windowed SoNaR sentences reaches best performance results on almost all evaluation metrics."
],
[
"In Section SECREF5, a first classification model based on neural networks is computed to predict die and dat labels. The binary classification model consists of an embedding layer, a bidirectional LSTM, a maxpooling layer and a linear layer. The softmax is taken over the output of the last layer and provides a probability distribution over die and dat prediction labels. The sentences receive the prediction label with the highest probability. It is trained, validated and tested four times using four different database settings. From an analysis of the performance metric results, several conclusions can be drawn. Firstly, in all cases, the model appears to predict the dat label more precisely than the die label. This may be caused by the higher number of dat than die instances in training, validation and test datasets extracted from the Europarl and SoNaR corpus. Secondly, when the dataset is more balanced, as in the SoNaR corpus, the difference in performance between die and dat labels decreases as expected. Thirdly, die/dat prediction performance increases when the window over the sentences is not limited to sentence boundaries (SoNaR windowed, no_boundaries). A probable reason for that higher performance is that the model's ability to detect antecedents in the preceeding or following sentence, while it is not able to do so when it is trained and tested on boundary-constraint windowed sentences (SoNaR windowed). Lastly, it appears that performance of the model drops significantly when the binary classification model is trained and tested on full sentences (Europarl full). In conclusion, the binary classification model performs best when it is trained on the larger, more evenly balanced SoNaR corpus that consists of windowed sentences that are not limited to sentence boundaries. A clear performance overview of the best performing binary classification and multitask classification models for die/dat prediction can be found in Table TABREF21.",
"In Section SECREF6, several multitask classification models are constructed to jointly execute two prediction tasks: die/dat prediction and POS prediction. The BiLSTM multitask classification model consists of an embedding layer, two consecutive bidirectional LSTMs and a maxpooling layer. The output of the maxpooling layer is used as input to two separate linear layers followed by a softmax layer. The two softmax layers yield a probability distribution for die/dat and POS labels. The model trained and tested on windowed SoNaR sentences that exceed sentence boundaries performs better than the model on boundary-constraint windowed sentences and full sentences. The best performing BiLSTM multitask classification model (Model 2) outperforms the best binary classification model (Model 1) on every evaluation metric for die/dat prediction. This could arguably be due to the increased batch size, the doubled embedding dimension, the extra bidirectional LSTM layer, the influence of the second prediction task and/or the split in sentence and context encoder. Firstly, the data is divided into batch sizes of 512 instead of 128. Table TABREF22 shows, however, that there is little consistent difference in performance when batch size is 512 or 128. Therefore, it can be suggested that an increased batch size has no directly positive influence on model performance. Secondly, the input data is transformed to 200-dimensional word embeddings instead of 100-dimensional word embeddings. From the results displayed in Table TABREF22, it appears that a change in word embedding dimension could be causing an slight increase in model performance. Thirdly, the multitask model contains two bidirectional LSTM layers opposed to the binary model that has only one layer. Table TABREF23 shows the influence of the number of layers on the performance of the binary classification model. 
When the binary classification model has an additional bidirectional LSTM layer, all the evaluation metrics rise with approximately 2%. However, when the binary classification model has three bidirectional LSTM layers, model performance drops significantly. It appears that the doubled number of layers is indeed one of the reasons why the multitask classification models perform better than the binary classification model. However, not every rise in number of layers necessarily influences a model's performance in a positive manner. Concerning the influence of the POS prediction task on die/dat prediction performance and syntactic knowledge in general, a comparison between a two-layer bidirectional LSTM binary classification model and the two-layer bidirectional LSTM multitask classification model is made and displayed in Table TABREF24. It seems that the integration of POS knowledge positively influences die/dat prediction performance, while all evaluation metrics have increased. When examining the influence of a context encoder on die/dat prediction performance of Model 3 and Model 4, the evaluation metrics of Model 2, 3 and 4 are compared. The metric scores are fairly similar which leads to the conclusion that the addition of a context encoder has little to no further influence on die/dat prediction performance. Moreover, the encoder architecture does not cause a considerable difference in die/dat prediction performance between the model with a feedforward context encoder (Model 3) and the model with a bidirectional LSTM context encoder (Model 4). It can thus be suggested that a model does not necessarily profit from a different architecture and that an extra focus on immediate context is not additionally advantageous for the die/dat prediction task.",
"Contrary to the little to no impact on die/dat prediction performance, the context encoder - especially the bidirectional LSTM context encoder - does have a direct positive impact on POS prediction performance. The difference in POS prediction performance between the three multitask prediction models can be found in Table TABREF25. The model with the bidirectional LSTM context encoder outperforms the other two multitask classification models on every evaluation metric. Considering its highest POS prediction performance and high die/dat prediction performance, it is suggested that the multitask prediction model with bidirectional LSTM context encoder (Model 4) is the overall best model."
],
[
"Deciding which pronoun to use in various contexts can be a complicated task. The correct use of die and dat as Dutch pronouns entails knowing the antecedent and - if the antecedent is a noun - its grammatical gender and number. We experimented with neural network models to examine whether die and dat instances in sentences can be computationally predicted and, if necessary, corrected. Our binary classification model reaches a promising 84.56 % accuracy. In addition, we extended that model to three multitask classification models that not only predict die and dat, but also predicts the POS (demonstrative pronoun, relative pronoun and subordinating conjunction). By increasing the word embedding dimension, doubling the number of bidirectional LSTM layers and integrating POS knowledge in the model, the multitask classification models raise die/dat prediction performance by approximately 4 %. Concerning POS prediction performance, the multitask classification model consisting of a sentence and context encoder performs best on all evaluation metrics and reaches a accuracy of 87.78 %.",
"There are ample opportunities to further analyze, enhance and/or extend the die/dat prediction model. A qualitative study of the learned model weights, for example, could provide more insight in the prediction mechanism of the models. We already obtain excellent results with a simple neural architecture comprising relatively few parameters. We believe that more complex architectures such as a transformer architecture BIBREF13 with multihead attention will improve results. It might also be interesting to look at the possibility of integrating a language model such as BERT BIBREF14 in the classification model (e.g., as pretrained embeddings). Moreover, the binary classification task could be extended to a multiclass classification task to predict not only die and dat labels, but also respectively equivalent deze and dit labels. The difference between die/dat and deze/dat, however, entails a difference in temporal and spatial information: while die/dat indicates a physically near or earlier mentioned antecedent, deze/dit implies that the antecedent is physically distant or later mentioned in the text. That difference may possibly cause a prediction model to base its predictions on other tokens in a text."
]
]
}
|
{
"question": [
"What are the sizes of both datasets?"
],
"question_id": [
"010e3793eb1342225857d3f95e147d8f8467192a"
],
"nlp_background": [
""
],
"topic_background": [
""
],
"paper_read": [
""
],
"search_query": [
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"The Dutch section consists of 2,333,816 sentences and 53,487,257 words.",
"The SONAR500 corpus consists of more than 500 million words obtained from different domains."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The datasets used for training, validation and testing contain sentences extracted from the Europarl corpus BIBREF1 and SoNaR corpus BIBREF2. The Europarl corpus is an open-source parallel corpus containing proceedings of the European Parliament. The Dutch section consists of 2,333,816 sentences and 53,487,257 words. The SoNaR corpus comprises two corpora: SONAR500 and SONAR1. The SONAR500 corpus consists of more than 500 million words obtained from different domains. Examples of text types are newsletters, newspaper articles, legal texts, subtitles and blog posts. All texts except for texts from social media have been automatically tokenized, POS tagged and lemmatized. It contains significantly more data and more varied data than the Europarl corpus. Due to the high amount of data in the corpus, only three subparts are used: Wikipedia texts, reports and newspaper articles. These subparts are chosen because the number of wrongly used die and dat is expected to be low."
],
"highlighted_evidence": [
"The datasets used for training, validation and testing contain sentences extracted from the Europarl corpus BIBREF1 and SoNaR corpus BIBREF2. The Europarl corpus is an open-source parallel corpus containing proceedings of the European Parliament. The Dutch section consists of 2,333,816 sentences and 53,487,257 words. The SoNaR corpus comprises two corpora: SONAR500 and SONAR1. The SONAR500 corpus consists of more than 500 million words obtained from different domains."
]
}
],
"annotation_id": [
"e6384d6727bc9ea8054643172c7cdd2424fa23e7"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
}
|
{
"caption": [
"Table 1: Grammar concerning die and dat",
"Table 2: Overview of datasets",
"Figure 1: Model architecture of the binary classification model",
"Table 3: Performance results of the binary classification model on the Europarl dataset containing full sentences (1), the Europarl dataset containing windowed sentences within sentence boundaries (2), the SoNaR dataset containing windowed sentences within sentence boundaries (3) and the SoNaR dataset containing windowed sentences exceeding sentence boundaries (4).",
"Figure 2: Overview of the two multitask classification model architectures",
"Table 4: Performance of the three multitask classification models for die/dat prediction",
"Table 5: Performance results of three multitask classification tasks for POS prediction: subordinating conjunction(sc), relative pronoun (rp) and demonstrative pronoun (dp)",
"Table 6: Comparison of die/dat prediction performance between best performing binary classification model (model 1, SoNaR windowed, no boundaries), multitask classification model (model 2, SoNaR windowed, no boundaries), multitask classification model with feedforward context encoder (model 3, SoNaR windowed) and multitask classification model with bidirectional LSTM context encoder (model 4, SoNaR windowed)",
"Table 7: The influence of batch size and embedding dimension on performance of the SoNaR-based, sentence-exceeding windowed trained multitask classification model (Model 2, SoNaR windowed, no boundaries)",
"Table 8: The influence of number of layers on performance of the SoNaR-based, sentence-exceeding windowed trained binary classification model (Model 1, SoNaR windowed, no boundaries)",
"Table 9: The influence of integrated POS knowledge on die/dat prediction performance. Comparison between Model 1 with an extra BiLSTM layer (No) and Model 2 (Yes), both trained and tested using SoNaR windowed, no boundaries dataset",
"Table 10: Comparison of POS prediction performance between best performing multitask classification model (model 2, SoNaR windowed, no boundaries), multitask classification model with feedforward context encoder (model 3, SoNaR windowed) and multitask classification model with bidirectional LSTM context encoder (model 4, SoNaR windowed)"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Figure1-1.png",
"5-Table3-1.png",
"6-Figure2-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"10-Table7-1.png",
"11-Table8-1.png",
"11-Table9-1.png",
"12-Table10-1.png"
]
}
|
1911.04952
|
'Warriors of the Word' -- Deciphering Lyrical Topics in Music and Their Connection to Audio Feature Dimensions Based on a Corpus of Over 100,000 Metal Songs
|
We look into the connection between the musical and lyrical content of metal music by combining automated extraction of high-level audio features and quantitative text analysis on a corpus of 124,288 song lyrics from this genre. Based on this text corpus, a topic model was first constructed using Latent Dirichlet Allocation (LDA). For a subsample of 503 songs, scores for predicting perceived musical hardness/heaviness and darkness/gloominess were extracted using audio feature models. By combining both audio feature and text analysis, we (1) offer a comprehensive overview of the lyrical topics present within the metal genre and (2) are able to establish whether or not levels of hardness and other music dimensions are associated with the occurrence of particularly harsh (and other) textual topics. Twenty typical topics were identified and projected into a topic space using multidimensional scaling (MDS). After Bonferroni correction, positive correlations were found between musical hardness and darkness and textual topics dealing with 'brutal death', 'dystopia', 'archaisms and occultism', 'religion and satanism', 'battle' and '(psychological) madness', while there are negative associations with topics like 'personal life' and 'love and romance'.
|
{
"section_name": [
"Introduction",
"Methodology",
"Methodology ::: Text Corpus Creation and Cleaning",
"Methodology ::: Topic Modelling via Latent Dirichlet Allocation",
"Methodology ::: High-Level Audio Feature Extraction",
"Methodology ::: Investigating the Connection between Audio and Text Features",
"Results ::: Textual Topics",
"Results ::: Correlations with Musical Dimensions",
"Conclusion and Outlook"
],
"paragraphs": [
[
"As audio and text features provide complementary layers of information on songs, a combination of both data types has been shown to improve the automatic classification of high-level attributes in music such as genre, mood and emotion BIBREF0, BIBREF1, BIBREF2, BIBREF3. Multi-modal approaches interlinking these features offer insights into possible relations between lyrical and musical information (see BIBREF4, BIBREF5, BIBREF6).",
"In the case of metal music, sound dimensions like loudness, distortion and particularly hardness (or heaviness) play an essential role in defining the sound of this genre BIBREF7, BIBREF8, BIBREF9, BIBREF10. Specific subgenres – especially doom metal, gothic metal and black metal – are further associated with a sound that is often described as dark or gloomy BIBREF11, BIBREF12.",
"These characteristics are typically not limited to the acoustic and musical level. In a research strand that has so far been generally treated separately from the audio dimensions, lyrics from the metal genre have come under relatively close scrutiny (cf. BIBREF13). Topics typically ascribed to metal lyrics include sadness, death, freedom, nature, occultism or unpleasant/disgusting objects and are overall characterized as harsh, gloomy, dystopian, or satanic BIBREF14, BIBREF13, BIBREF15, BIBREF16, BIBREF17.",
"Until now, investigations on metal lyrics were limited to individual cases or relatively small corpora – with a maximum of 1,152 songs in BIBREF17. Besides this, the relation between the musical and the textual domain has not yet been explored. Therefore, we examine a large corpus of metal song lyrics, addressing the following questions:",
"Which topics are present within the corpus of metal lyrics?",
"Is there a connection between characteristic musical dimensions like hardness and darkness and certain topics occurring within the textual domain?"
],
[
"In our sequential research design, the distribution of textual topics within the corpus was analyzed using latent Dirichlet allocation (LDA). This resulted in a topic model, which was used for a probabilistic assignment of topics to each of the song documents. Additionally, for a subset of these songs, audio features were extracted using models for high-level music dimensions. The use of automatic models for the extraction of both text as well as musical features allows for scalability as it enables a large corpus to be studied without depending on the process of manual annotation for each of the songs. The resulting feature vectors were then subjected to a correlation analysis. Figure FIGREF6 outlines the sequence of the steps taken in processing the data. The individual steps are explained in the following subsections."
],
[
"For gathering the data corpus, a web crawler was programmed using the Python packages Requests and BeautifulSoup. In total, 152,916 metal music lyrics were extracted from www.darklyrics.com.",
"Using Python’s langdetect package, all non-English texts were excluded. With the help of regular expressions, the texts were scanned for tokens indicating meta-information, which is not part of the actual lyrics. To this end, a list of stopwords referring to musical instruments or the production process (e.g. ‘recorded’, ‘mixed’, ‘arrangement by’, ‘band photos’) was defined in addition to common stopwords. After these cleaning procedures, 124,288 texts remained in the subsample. For text normalization, stemming and lemmatization were applied as further preprocessing steps."
],
[
"We performed a LDA BIBREF18 on the remaining subsample to construct a probabilistic topic model. The LDA models were created by using the Python library Gensim BIBREF19. The lyrics were first converted to a bag-of-words format, and standard weighting of terms provided by the Gensim package was applied.",
"Log perplexity BIBREF20 and log UMass coherence BIBREF21 were calculated as goodness-of-fit measures evaluating topic models ranging from 10 to 100 topics. Considering these performance measures as well as qualitative interpretability of the resulting topic models, we chose a topic model including 20 topics – an approach comparable with BIBREF22. We then examined the most salient and most typical words for each topic.",
"Moreover, we used the ldavis package to analyze the structure of the resulting topic space BIBREF23. In order to do so, the Jensen-Shannon divergence between topics was calculated in a first step. In a second step, we applied multidimensional scaling (MDS) to project the inter-topic distances onto a two-dimensional plane. MDS is based on the idea of calculating dissimilarities between pairs of items of an input matrix while minimizing the strain function BIBREF24. In this case, the closer the topics are located to one another on the two-dimensional plane, the more they share salient terms and the more likely a combination of these topics appear in a song."
],
[
"The high-level audio feature models used had been constructed in previous examinations BIBREF25, BIBREF26. In those music perception studies, ratings were obtained for 212 music stimuli in an online listening experiment by 40 raters.",
"2",
"Based on this ground truth, prediction models for the automatic extraction of high-level music dimensions – including the concepts of perceived hardness/heaviness and darkness/gloominess in music – had been trained using machine learning methods. In a second step, the model obtained for hardness had been evaluated using further listening experiments on a new unseen set of audio stimuli BIBREF26. The model has been refined against this backdrop, resulting in an $R^2$ value of 0.80 for hardness/heaviness and 0.60 for darkness/gloominess using five-fold cross-validation.",
"The resulting models embedded features implemented in LibROSA BIBREF27, Essentia BIBREF28 as well as the timbral models developed as part of the AudioCommons project BIBREF29."
],
[
"Finally, we drew a random sample of 503 songs and used Spearman's $\\rho $ to identify correlations between the topics retrieved and the audio dimensions obtained by the high-level audio feature models. We opted for Spearman’s $\\rho $ since it does not assume normal distribution of the data, is less prone to outliers and zero-inflation than Pearson’s $r$. Bonferroni correction was applied in order to account for multiple-testing."
],
[
"Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA. The topics are numbered in descending order according to their prevalence (weight) in the text corpus. For each topic, a qualitative interpretation is given along with the 10 most salient terms.",
"The salient terms of the first topic – and in parts also the second – appear relatively generic, as terms like e.g. ‘know’, ‘never’, and ‘time’ occur in many contexts. However, the majority of the remaining topics reveal distinct lyrical themes described as being characteristic for the metal genre. ‘Religion & satanism’ (topic #5) and descriptions of ‘brutal death’ (topic #7) can be considered as being typical for black metal and death metal respectively, whereas ‘battle’ (topic #6), ‘landscape & journey’ (topic #11), ‘struggle for freedom’ (topic #12), and ‘dystopia’ (topic #15), are associated with power metal and other metal subgenres.",
"2",
"This is highlighted in detail in Figure FIGREF11. Here, the topic distributions for two exemplary bands contained within the sample are presented. For these heat maps, data has been aggregated over individual songs showing the topic distribution at the level of albums over a band’s history. The examples chosen illustrate the dependence between textual topics and musical subgenres. For the band Manowar, which is associated with the genre of heavy metal, power metal or true metal, a prevalence of topic #6 (‘battle’) can be observed, while a distinctive prevalence of topic #7 (‘brutal death’) becomes apparent for Cannibal Corpse – a band belonging to the subgenre of death metal.",
"Within the topic configuration obtained via multidimensional scaling (see Figure FIGREF12), two latent dimensions can be identified. The first dimension (PC1) distinguishes topics with more common wordings on the right hand side from topics with less common wording on the left hand side. This also correlates with the weight of the topics within the corpus. The second dimension (PC2) is characterized by an contrast between transcendent and sinister topics dealing with occultism, metaphysics, satanism, darkness, and mourning (#9, #3, .#5, #13, and #16) at the top and comparatively shallow content dealing with personal life and Rock’n’Roll lifestyle using a rather mundane or vulgar vocabulary (#1, #8, and #19) at the bottom. This contrast can be interpreted as ‘otherworldliness / individual-transcending narratives’ vs. ‘worldliness / personal life’."
],
[
"In the final step of our analysis, we calculated the association between the twenty topics discussed above and the two high-level audio features hardness and darkness using Spearman’s $\\rho $. The results are visualized in Figure FIGREF13 and the $\\rho $ values listed in table TABREF10.",
"Significant positive associations can be observed between musical hardness and the topics ‘brutal death’, ‘dystopia’, ‘archaisms & occultism’, ‘religion & satanism’, and ‘battle’, while it is negatively linked to relatively mundane topics concerning ‘personal life’ and ‘love & romance’. The situation is similar for dark/gloomy sounding music, which in turn is specifically related to themes such as ‘dystopia’ and ‘(psychological) madness’. Overall, the strength of the associations is moderate at best, with a tendency towards higher associations for hardness than darkness. The strongest association exists between hardness and the topic ‘brutal death’ ($\\rho = 0.267$, $p < 0.01$)."
],
[
"Applying the example of metal music, our work examined the textual topics found in song lyrics and investigated the association between these topics and high-level music features. By using LDA and MDS in order to explore prevalent topics and the topic space, typical text topics identified in qualitative analyses could be confirmed and objectified based on a large text corpus. These include e.g. satanism, dystopia or disgusting objects. It was shown that musical hardness is particularly associated with harsh topics like ‘brutal death’ and ‘dystopia’, while it is negatively linked to relatively mundane topics concerning personal life and love. We expect that even stronger correlations could be found for metal-specific topics when including more genres covering a wider range of hardness/darkness values.",
"Therefore, we suggest transferring the method to a sample including multiple genres. Moreover, an integration with metadata such as genre information would allow for the testing of associations between topics, genres and high-level audio features. This could help to better understand the role of different domains in an overall perception of genre-defining attributes such as hardness."
]
]
}
|
{
"question": [
"Why are the scores for predicting perceived musical hardness and darkness extracted only for subsample of 503 songs?",
"How long is the model trained?",
"What are lyrical topics present in the metal genre?"
],
"question_id": [
"c20bb0847ced490a793657fbaf6afb5ef54dad81",
"ff8557d93704120b65d9b597a4fab40b49d24b6d",
"447eb98e602616c01187960c9c3011c62afd7c27"
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"3aa3494e466955e4dafe6bcc39b0b8860c76fcd9"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"113b8e3d7f892dd6d1686662021e9a13987eb99f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Table TABREF10 displays the twenty resulting topics"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA. The topics are numbered in descending order according to their prevalence (weight) in the text corpus. For each topic, a qualitative interpretation is given along with the 10 most salient terms.",
"FLOAT SELECTED: Table 1: Overview of the resulting topics found within the corpus of metal lyrics (n = 124,288) and their correlation to the dimensions hardness and darkness obtained from the audio signal (see section 3.2)"
],
"highlighted_evidence": [
"Table TABREF10 displays the twenty resulting topics found within the text corpus using LDA.",
"FLOAT SELECTED: Table 1: Overview of the resulting topics found within the corpus of metal lyrics (n = 124,288) and their correlation to the dimensions hardness and darkness obtained from the audio signal (see section 3.2)"
]
}
],
"annotation_id": [
"1904a06a5673a96187e2255ef65b9b6d7970150f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: Processing steps of the approach illustrating the parallel analysis of text and audio features",
"Table 1: Overview of the resulting topics found within the corpus of metal lyrics (n = 124,288) and their correlation to the dimensions hardness and darkness obtained from the audio signal (see section 3.2)",
"Figure 2: Comparison of the topic distributions for all included albums by the bands Manowar and Cannibal Corpse showing a prevalence of the topics ‘battle’ and ‘brutal death’ respectively",
"Figure 3: Topic configuration obtained via multidimensional scaling. The radius of the circles is proportional to the percentage of tokens covered by the topics (topic weight).",
"Figure 4: Correlations between lyrical topics and the musical dimensions hardness and darkness; ∗: p < 0.05, ∗∗: p < 0.00125 (Bonferroni-corrected significance level)"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png"
]
}
|
1910.00825
|
Abstractive Dialog Summarization with Semantic Scaffolds
|
The demand for abstractive dialog summary is growing in real-world applications. For example, customer service centers or hospitals would like to summarize customer service interactions and doctor-patient interactions. However, few researchers have explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: the informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose Scaffold Pointer Network (SPNet) to utilize the existing annotation on speaker role, semantic slot and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics.
|
{
"section_name": [
"Introduction",
"Related Work",
"Proposed Method",
"Proposed Method ::: Background",
"Proposed Method ::: Scaffold Pointer Network (SPNet)",
"Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Speaker Role Scaffold",
"Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Semantic Slot Scaffold",
"Proposed Method ::: Scaffold Pointer Network (SPNet) ::: Dialog Domain Scaffold",
"Experimental Settings ::: Dataset",
"Experimental Settings ::: Evaluation Metrics",
"Experimental Settings ::: Implementation Details",
"Results and Discussions ::: Automatic Evaluation Results",
"Results and Discussions ::: Human Evaluation Results",
"Results and Discussions ::: Case study",
"Conclusion and Future Work"
],
"paragraphs": [
[
"Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of time used for filling medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization will be a potential field in summarization track.",
"There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them to a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly study extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form summary. Because dialogs are highly dependant on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization dataset like CNN/Daily Mail BIBREF3 is on news documents. AMI meeting corpus BIBREF4 is the common benchmark, but it only has extractive summary.",
"In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news document. However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoder to attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances for producing delexicalized summary, and fills in slot values to generate complete summary. Finally, we incorporate dialog domain scaffold by jointly optimizing dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics."
],
[
"BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework BIBREF8 and attention mechanism BIBREF9, achieving state-of-the-art results on Gigaword and DUC-2004 dataset. BIBREF10 proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. BIBREF5 applied pointing BIBREF11 as copy mechanism and use coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15.",
"Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization.",
"Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpus. BIBREF26 annotated topic descriptions in AMI meeting corpus as the summary. However, topics they defined are coarse, such as “industrial designer presentation\". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work."
],
[
"As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain."
],
[
"We first introduce Pointer-Generator BIBREF5. It is a hybrid model of the typical Seq2Seq attention model BIBREF29 and pointer network BIBREF11. Seq2Seq framework encodes source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states $s_t$. In Pointer-Generator, attention distribution $a^t$ is computed as in BIBREF9:",
"where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters.",
"With the attention distribution $a^t$, context vector $h_t^*$ is computed as the weighted sum of encoder's hidden states. Context vector is regarded as the attentional information in the source text:",
"Pointer-Generator differs from typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. Generation probability $p_{gen}$ is calculated as “a soft switch\" to choose from copy and generation:",
"where $x_t$ is the decoder input, $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\\sigma $ is sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$.",
"The ability to select from copy and generation corresponds to a dynamic vocabulary. Pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary(OOV) words appeared in the source text. The final probability distribution $P(w)$ on extended vocabulary is computed as follows:",
"where $P_{vocab}$ is the distribution on the original vocabulary, $V^{\\prime }$, $V$, $b$ and $b^{\\prime }$ are learnable parameters used to calculate such distribution."
],
[
"Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating semantic slot scaffold and dialog domain scaffold."
],
[
"Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:",
"The pointing mechanism in our model follows the Equation DISPLAY_FORM4, and we obtain the context vector $h_t^{*}$:"
],
[
"We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.",
"We first train the model with the delexicalized utterance. Attention distribution $a^t$ over the source tokens instructs the decoder to fill up the slots with lexicalized values:",
"Note that $w_{slot}$ specifies the tokens that represents the slot name (e.g. [hotel_place], [time]). Decoder directly copies lexicalized value $value(w_i)$ conditioned on attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as Equation DISPLAY_FORM5."
],
[
"We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset. Generally, the content in different domains varies so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:",
"where $U$, $U^{\\prime }$, $b_{d}$ and $b_{d}^{\\prime }$ are all trainable parameters in the classifier. We denote the loss function of summarization as $loss_1$ and domain classification as $loss_2$. Assume target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log likelihood of $w_t^{*}$ over the generated sequence:",
"The domain classification task is a multi-label binary classification problem. We use binary cross entropy loss between the $i^{th}$ domain label $\\hat{d_i}$ and predict probability $d_i$ for this task:",
"where $|D|$ is the number of domains. Finally, we reweight the classification loss with hyperparameter $\\lambda $ and the objective function is:"
],
[
"We validate SPNet on MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and a information center clerk on varies booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs, spanning over seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, instruction is provided for crowd workers to perform the task. We use the instructions as the dialog summary, and an example data is shown in Table TABREF25. Dialog domain label is extracted from existing MultiWOZ annotation. In the experiment, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing."
],
[
"ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L. They measure the word-overlap, bigram-overlap, and longest common sequence between the reference summary and the generated summary respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations:",
"Reference: You are going to [restaurant_name] at [time].",
"Summary: You are going to [restaurant_name] at.",
"In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to\") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:",
"where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.",
"CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain."
],
[
"We implemented our baselines with OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information from different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and the unidirectional LSTM decoder to 512 dimension. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\\beta _1=0.9$, $\\beta _2=0.999$. We reduce the learning rate to half to avoid overfitting when the validation loss increases. We set the hyperparameter $\\lambda $ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameter. Our model with and without multi-task takes about 15 epochs and seven epochs to converge, respectively."
],
[
"To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation.",
"We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics."
],
[
"We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).",
"We present human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator in all three evaluation metrics. SPNet scored better than Pointer-Generator on relevance and readability. All generated summaries are relatively concise; therefore, they score very similar in conciseness. Ground truth is still perceived as more relevant and readable than SPNet results. However, ground truth does not get a high absolute score. From the feedback of the evaluators, we found that they think that the ground truth has not covered all the necessary information in the conversation, and the description is not so natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. Results in the ranking evaluation show more differences between different summaries. SPNet outperforms Pointer-Generator with a large margin. Its performance is relatively close to the ground truth summary."
],
[
"Table TABREF25 shows an example summary from all models along with ground truth summary. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). Missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi\" several times and has conflicting requirements on wifi. This is because dialogs has information redundancy, but single-speaker model ignores such dialog property.",
"Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) and its address (517a Coldham Lane) referred in the source. It occurs because the ground truth summary doe not cover it in the training data. As a supervised method, SPNet is hard to generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize the content not covered in the reference summary (see Table TABREF31 in Appendix).",
"Furthermore, although our SPNet achieves a much-improved performance, the application of SPNet still needs extra annotations for semantic scaffolds. For a dialog dataset, speaker role scaffold is a natural pattern for modeling. Most multi-domain dialog corpus has the domain annotation. While for texts, for example news, its topic categorization such as sports or entertainment can be used as domain annotation. We find that semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team name in sports news or professional terminology in a technical meeting."
],
[
"We adapt a dialog generation dataset, MultiWOZ to an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as the semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric CIC that considers semantic slot relevance to serve as a complementary metric to ROUGE. SPNet outperforms baseline methods in both automatic and human evaluation metrics. It suggests that involving semantic scaffolds efficiently improves abstractive summarization quality in the dialog scene.",
"Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries."
]
]
}
|
{
"question": [
"By how much does SPNet outperforms state-of-the-art abstractive summarization methods on evaluation metrics?",
"What automatic and human evaluation metrics are used to compare SPNet to its counterparts?",
"Is proposed abstractive dialog summarization dataset open source?",
"Is it expected to have speaker role, semantic slot and dialog domain annotations in real world datasets?",
"How does SPNet utilize additional speaker role, semantic slot and dialog domain annotations?",
"What are previous state-of-the-art document summarization methods used?",
"How does new evaluation metric considers critical informative entities?",
"Is new evaluation metric extension of ROGUE?"
],
"question_id": [
"f398587b9a0008628278a5ea858e01d3f5559f65",
"d5f8707ddc21741d52b3c2a9ab1af2871dc6c90b",
"58f3bfbd01ba9768172be45a819faaa0de2ddfa4",
"73633afbefa191b36cca594977204c6511f9dad4",
"db39a71080e323ba2ddf958f93778e2b875dcd24",
"6da2cb3187d3f28b75ac0a61f6562a8adf716109",
"c47e87efab11f661993a14cf2d7506be641375e4",
"14684ad200915ff1e3fc2a89cb614e472a1a2854"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "SPNet vs best baseline:\nROUGE-1: 90.97 vs 90.68\nCIC: 70.45 vs 70.25",
"evidence": [
"We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics.",
"FLOAT SELECTED: Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds."
],
"highlighted_evidence": [
"We show all the models' results in Table TABREF24",
"FLOAT SELECTED: Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds."
]
}
],
"annotation_id": [
"d214c4bc382c51d8f0cd08b640a46c76afbbbd86"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"ROUGE and CIC",
"relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).",
"We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but a relative low CIC scores. It suggests that the baselines have more room for improvement on preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but is having a higher cost on training time and computing resources. We observe that SPNet outperforms other methods in all the automatic evaluation metrics with a big margin, as it incorporates all the three semantic scaffolds. Semantic slot contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics."
],
"highlighted_evidence": [
"The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).",
"We observe that SPNet reaches the highest score in both ROUGE and CIC"
]
}
],
"annotation_id": [
"87489cb800ee2bd74ed869331e049f50df8490cd"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"dd2e932f857b22b80622c71fdff3724951a7b2ef"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Not at the moment, but summaries can be additionaly extended with this annotations.",
"evidence": [
"Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries."
],
"highlighted_evidence": [
"We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries."
]
}
],
"annotation_id": [
"658e80b812db9c136734de7fac04f01050ba7696"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Our encoder-decoder framework employs separate encoding for different speakers in the dialog.",
"We integrate semantic slot scaffold by performing delexicalization on original dialogs.",
"We integrate dialog domain scaffold through a multi-task framework."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:",
"We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces the slot values with its semantic slot name(e.g. replace 18:00 with [time]). It is easier for the language modeling to process delexicalized texts, as they have a reduced vocabulary size. But these generated sentences lack the semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed single delexicalized utterance BIBREF31 as generated response. We propose to perform delexicalization in dialog summary, since delexicalized utterances can simplify dialog modeling. We fill the generated templates with slots with the copy and pointing mechanism.",
"We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset. Generally, the content in different domains varies so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:"
],
"highlighted_evidence": [
"Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ .",
"We integrate semantic slot scaffold by performing delexicalization on original dialogs. Delexicalization is a common pre-processing step in dialog modeling.",
"We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset."
]
}
],
"annotation_id": [
"8c16d083a2893633aec9f3bcfddc03ede96237de"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Pointer-Generator",
"Transformer"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation."
],
"highlighted_evidence": [
"To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6."
]
}
],
"annotation_id": [
"5274d125124da018bd4cea634e16b14af46f9fe4"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Answer with content missing: (formula for CIC) it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities",
"evidence": [
"In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to\") significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE did not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:",
"where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.",
"CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain."
],
"highlighted_evidence": [
"To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:\n\nwhere $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.\n\nCIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities."
]
}
],
"annotation_id": [
"1162bf54756068e0894e0ec3e15af76802321f63"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain."
],
"highlighted_evidence": [
"CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities."
]
}
],
"annotation_id": [
"5fa3ee21cd7d33a6a7d8bad663cc0b8a8cc5bab4"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: SPNet overview. The blue and yellow box is the user and system encoder respectively. The encoders take the delexicalized conversation as input. The slots values are aligned with their slots position. Pointing mechanism merges attention distribution and vocabulary distribution to obtain the final distribution. We then fill the slots values into the slot tokens to convert the template to a complete summary. SPNet also performs domain classification to improve encoder representation.",
"Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds.",
"Table 2: An example dialog and Pointer-Generator, SPNet and ground truth summaries. We underline semantic slots in the conversation. Red denotes incorrect slot values and green denotes the correct ones.",
"Table 3: The upper is the scoring part and the lower is the the ranking part. SPNet outperforms Pointer-Generator in all three human evaluation metrics and the differences are significant, with the confidence over 99.5% in student t test. In the ranking part, the percentage of each choice is shown in decimal. Win, lose and tie refer to the state of the former summary in ranking."
],
"file": [
"4-Figure1-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png"
]
}
|
1911.03705
|
CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning
|
Rational humans can generate sentences that cover a certain set of concepts while describing natural and common scenes. For example, given {apple(noun), tree(noun), pick(verb)}, humans can easily come up with scenes like "a boy is picking an apple from a tree" via their generative commonsense reasoning ability. However, we find this capacity has not been well learned by machines. Most prior works in machine commonsense focus on discriminative reasoning tasks with a multi-choice question answering setting. Herein, we present CommonGen: a challenging dataset for testing generative commonsense reasoning with a constrained text generation task. We collect 37k concept-sets as inputs and 90k human-written sentences as associated outputs. Additionally, we provide high-quality rationales behind the reasoning process for the development and test sets from the human annotators. We demonstrate the difficulty of the task by examining a wide range of sequence generation methods with both automatic metrics and human evaluation. The state-of-the-art pre-trained generation model, UniLM, is still far from human performance in this task. Our data and code are publicly available at this http URL .
|
{
"section_name": [
"Introduction",
"Problem Formulation",
"The CommonGen Dataset",
"The CommonGen Dataset ::: Collecting Concept-Sets with Captions",
"The CommonGen Dataset ::: Crowd-Sourcing via AMT",
"The CommonGen Dataset ::: Statistics",
"Methods",
"Methods ::: Seq-to-Seq Learning",
"Methods ::: A BERT-based Method: UniLM",
"Methods ::: Other methods",
"Methods ::: Incorporating Commonsense Rationales",
"Evaluation",
"Evaluation ::: Setup",
"Evaluation ::: Automatic Metrics",
"Evaluation ::: Experimental Results",
"Evaluation ::: Human Evaluation",
"Evaluation ::: Qualitative Analysis",
"Related Work ::: Machine Common Sense",
"Related Work ::: Constrained Text Generation",
"Conclusion"
],
"paragraphs": [
[
"Commonsense reasoning has long been acknowledged as a critical bottleneck of artificial intelligence and especially in natural language processing. It is an ability of combining commonsense facts and logic rules to make new presumptions about ordinary scenes in our daily life. A distinct property of commonsense reasoning problems is that they are generally trivial for human-beings while challenging for machine reasoners.",
"There have been a few recent tasks and datasets for testing machine commonsense, while most of them frame their problems as multi-choice question answering, such as CSQA BIBREF0 and SWAG BIBREF1. We name this kind of tasks as deterministic commonsense reasoning because they focus on modeling the plausibility of given complete scenes. The systems for these tasks thus have to work with biased selection of distractors, and thus are less practical or challenging. Simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF2. On the other hand, few work has been done so far in testing machine commonsense in a generative reasoning setting, where a reasoner is expected to complete scenes with several given concepts.",
"Specifically, we would like to investigate if machine-reasoning models can generate a sentence that contains a required set of concepts (i.e. nouns or verbs) while describing a common scene in our daily life. For example, as shown in Figure FIGREF1, given an unordered collection of concepts “{apple (noun), bag (noun), pick (verb), place (verb), tree (noun)}”, a rational reasoner should be able to generate a sentence like “A boy picks some apples from a tree and places them into a bag.”, which describes a natural scene and contains all given concepts. The creation of this sentence is easy for humans while non-trivial for even state-of-the-art conditional language generation models. We argue that such an ability of recovering natural scenes of daily life can benefit a wide range of natural language generation (NLG) tasks including image/video captioning BIBREF3, BIBREF4, scene-based visual reasoning and VQA BIBREF5, storytelling BIBREF6, and dialogue systems BIBREF7, BIBREF8.",
"Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) We introduce the first large-scale constrained text generation dataset targeting at generative commonsense reasoning; 2) We systematically compare methods for this (lexically) constrained text generation with extensive experiments and evaluation. 3) Our code and data are publicly available (w/ the URL in the abstract), so future research in this direction can be directly developed in a unified framework."
],
[
"In this section, we formulate our task with mathematical notations and discuss its inherent challenges. The input to the task is a set of $n$ concepts $x=\\lbrace c_1,c_2,\\dots ,c_n\\rbrace \\in \\mathcal {X}$, where $c_i\\in \\mathcal {C}$ is a common noun or verb. $\\mathcal {X}$ denotes the space of concept-sets and $\\mathcal {C}$ stands for the concept vocabulary. The expected output of this task is a simple, grammatical sentence $y\\in \\mathcal {Y}$, describing a natural scene in our daily-life that covers all given concepts in $x$. Note that other forms of given concepts are also accepted, such as plural forms of nouns and verbs. In addition, we also provide rationales as an optional resource to model the generation process. For each pair of $(x, y)$, a rationale $r$ is a list of sentences that explains the background commonsense knowledge used in the scene recovering process.",
"The task is to learn a structured predictive function $f:\\mathcal {X} \\rightarrow \\mathcal {Y}$, which maps a concept-set to a sentence. Thus, it can be seen as a special case of constrained text generation BIBREF9. The unique challenges of our proposed task come from two main aspects as follows.",
"Constrained Decoding. Lexically constrained decoding for sentence generation has been an important and challenging research topic in machine translation community BIBREF10, where they focus on how to decode sentences when some words/phrases (e.g. terminology) must present in target sentences (Section SECREF6). However, it is still an open problem how to efficiently generate sentences given an unordered set of multiple keywords with potential morphological changes (e.g. “pick” $\\rightarrow $ “picks” in the previous case). Apart from that, the part-of-speech constraints brings even more difficulties (e.g. “place” can be verb/noun).",
"Commonsense Reasoning. Apart from the challenge in constrained decoding, a generative commonsense reasoner also has to compositionally use (latent) commonsense knowledge for generating most plausible scenes. Recall the illustrative example in Figure FIGREF1, even such a simple scene generation process needs pretty much commonsense knowledge like: 1) “apples grow in trees”; 2) “bags are containers that you can put something in”; 3) “you usually pick something and then place it in a container”. Expected reasoners have to prioritize target scenes over an infinity number of less plausible scenes like “A boy picks an apple tree and places it into bags.” or “A boy places some bags on a tree and picks an apple.”."
],
[
"In this section, we present how we build the CommonGen dataset for testing machine commonsense with generative reasoning. The overall data collection process is as follows. 1) We first collect a large amount of high-quality image/video caption sentences from several existing corpora, 2) Then, we compute co-occurrence statistics about concept-sets of different sizes ($3\\sim 5$), such that we can find the concept-sets that are more likely to be present in the same scene. 3) Finally, we ask human crowd-workers from AMT to write scenes with rationales for every given concept-set, which serve as our development and test sets. The training set consists of carefully post-processed human-written caption sentences, which have little overlap with dev/test sets. We present the statistics and show its inherent challenges at the end of this section."
],
[
"Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability of generating natural scenes with a given set of concepts. The expected concept-sets in our task are supposed to be likely co-occur in natural, daily-life scenes . The concepts in images/videos captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total.",
"We assume if a set of concepts are all mentioned together in more caption sentences, then this concept-set is more like to co-occur. Thus, we compute the co-occurrence frequency of all possible concept-sets that have $3\\sim 5$ concepts, named as three/four/five-concept-sets respectively. Each concept-set is associated with at least one caption sentences. We carefully post-process them and take the shortest ones with minimal overlaps as the final data. These initial concept-sets are further divided into three parts: train/dev/test. We then iterate all training concept-sets and remove the ones that have more than two overlapping concepts with any concept-set in the dev or test set. Thus, the dev/test set can better measure the generalization ability of models on unseen combinations of concepts."
],
[
"It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.",
"We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data."
],
[
"We present the statistical information of our final dataset. Firstly, we summarize the basic statistics in Table TABREF9, such as the number of unique concept-sets, scene sentences, and sentence lengths. In total, there are 3,706 unique concepts among all concept-sets, and 3,614/1,018/1,207 in the train/dev/test parts respectively. Note that there are 4% of the dev and 6% of the test concepts never appear in the training data, so we can better understand how well trained models can perform with unseen concepts.",
"We analyze the overlap between training concept-sets and dev/test concept-sets. By average, we find that 98.8% of the training instances share no common concept at all with dev/test data, such that the dev/test can help us analyze model performance on new combinations of concepts.",
"We also visualize the frequency distribution of our test concept-sets in Figure FIGREF7 by showing the frequency of top 50 single concepts and co-occurred concept pairs."
],
[
"In this section, we introduce the methods that we adopt for the proposed constrained text generation task. We group these methods into several types as follows. Basically, we have different kinds of encoder-decoder architectures with copy attention mechanism, including both classic and recently proposed methods. Apart from that, we utilize the state-of-the-art pre-trained sentence generation model for our task. Moreover, we include three typical models for abstractive summarization, story generation respectively, and keywords-based decoding of language models."
],
[
"One very straightforward way is to form this problem as a “sequence”-to-sequence task, where input sequences are randomly sorted sets of given concepts. In this way, encoder-decoder seq2seq architectures based on bidirectional RNN (bRNN) BIBREF17 or Transformer (Trans.) BIBREF18 can be directly adopted to the task, just like many other conditional sequence generation problems (translation, summarization, etc.).",
"Order-insensitive processing. However, these encoders may degrade because our inputs are actually order-insensitive. We thus try to use multi-layer perceptrons (MLP) with mean-pooling as the encoder (“mean encoder”) over sequences of word vectors to completely eliminate the order sensitivity. Similarly, we consider removing the positional embeddings in Transformers (Trans. w/o Pos).",
"Copying mechanism. The above-mentioned architectures with vanilla attention can miss the words in input sequences and thus produce either unknown tokens or synonyms. To force the decoder to produce target sentences with a constraint on input sentence, we utilize the copying mechanism BIBREF19 for all these models. We follow the implementation of these methods by OpenNMT-py BIBREF20.",
"Non-autoregressive generation. Recent advances in conditional sentence generation have a focus on edit-based models, which iteratively refine generated sequences (usually bounded by a fixed length). These models potentially get better performance than auto-regressive methods because of their explicit modeling on iterative refinements. We study typical models including iNAT BIBREF21, Insertion Transformer (InsertTrans) BIBREF22, and Levenshtein Transformer (LevenTrans) BIBREF23."
],
[
"We employ a new unified pre-trained language model, UniLM BIBREF24, which uses BERT BIBREF25 as the encoder and then fine-tunes the whole architecture with different generation-based objective. To the best of our knowledge, the UniLM model is the state-of-the-art method for a wide range of conditional text generation tasks including summarization, question generation, and dialogue responding."
],
[
"Based on the similarity between our task and abstractive summarization and story generation (with given topic words), we also apply Pointer Generator Networks (“PointerGen”) BIBREF26 and Multi-scale Fusion Attention (“Fusion Attn.”) BIBREF27 model respectively for our task."
],
[
"We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings)."
],
[
"Herein, we present the experimental results for comparing different baseline methods in the proposed setting. We first introduce the setup and automatic metrics, and then we present the results and analysis. Finally, we show human evaluation results and qualitative analysis."
],
[
"We use the proposed CommonGen dataset in two setting: knowledge-agnostic and knowledge-aware. For the knowledge-agnostic setting, we simply apply the methods in Section SECREF4 while we concatenate rationales and input concept-sets together as the knowledge-aware inputs (“$+r$”)."
],
[
"For automatically evaluating our methods, we propose to use widely used metric for image/video captioning. This is because the proposed CommonGen task can be regarded as also a caption task where the context are incomplete scenes with given concept-sets. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons more clear, we show the delta of BERTScore results by subtracting the score of merely using input concept-sets as target sentences, named $\\triangle $BERTS.",
"To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”."
],
[
"We present the experimental results of five groups of methods that are introduced in Section SECREF4. We find that the model UniLM outperforms all other baseline methods by a large margin, which is expected due to it is pre-trained with the BERT encoder towards generation objectives. However, its performance is still way far from the human bound, and this margin is even larger in test data.",
"We notice that the most recent edit-based model named LevenTrans archives the best performance among models without pre-training at all. This shows that edit-based sequence generation models can better deal with the cases where target sentences share similar vocabulary with source ones. Nonetheless, the other two models within the same sequence modeling framework (i.e. fairseq) are much worse, which might because of their specialty designed for machine translation.",
"An order-insensitive sequence/set encoder, “mean encoder”, outperform order-sensitive counterparts like “bRNN”. However, such a marginal improvement is not seen in the comparison between “Trans.” vs “Trans. w/o Pos”. We assume that for short sequences the order sensitivity does not harm sequential encoders, while positional embeddings in Transformers can better improve the self-attention mechanism. Also, we find that Transformer-based seq2seq architectures are not outperforming simpler models like bRNN.",
"As for the use of additional retrieved sentences form OMCS corpus and human-written associated rationales, we find that they are not generally helpful in investigated architectures. Although they increase the BLEU and ROUGE scores, the metrics specially designed for captioning like CIDEr and SPICE are dropping down. We argue that it might because the OMCS sentences are actually not aligned with training data, and more sophisticated methods for encoding such non-sequential facts in a more compositional way."
],
[
"From the automatic evaluation results with multiple metrics, we have a rough idea of the performance of all models. However, no automatic metric is perfect, especially for a newly proposed generation task like the CommonGen. We thus ask humans to rank 100 outputs of 6 selected typical models as well as one randomly picked reference sentence, forming seven systems in total. Annotators are educated to rank results by their coverage, fluency, and plausibility in daily life. Then, we compute the cumulative gains of each system in all 100 cases:",
"$S^{(k)}_i$ is the final score of the $i$-th system by the $k$-th annotator. $G^{k}_{i, j}$ is the rank position of the $i$-th system output for $j$-th example. In our case, $N=100$, $K = 5$, $G^{k}_{i, j}\\in [1,7]$.",
"As shown in Table TABREF22, we compare different systems including human bound for both the above-introduced cumulative ranking scores and the average hit@top3 rates with standard deviations. We find that the correlation between human evaluation and CIDEr and SPICE are better than the other metrics (see Table TABREF15)."
],
[
"For more clearly observe the performance of interested models, we present several real system outputs on the test set in Table TABREF24. We find that models usually cannot cover all given concepts, and also can produce repetitions of given concepts (e.g. “a dog catches a dog”, “a couple of couples”, and “at an object and an object .”). Moreover, we find that the order of actions may be mot natural. For example, the model output “a man pulls a sword out of his mouth and swallows it” makes less sense because a man usually swallow a sword first before he pull it out in such performances."
],
[
"Machine common sense (MCS) has long been considered as one of the most significant area in artificial intelligence. Recently, there are various emerging datasets for testing machine commonsense from different angles, such as commonsense extraction BIBREF33, BIBREF34, next situation prediction (SWAG BIBREF1, CODAH BIBREF35, HellaSWAG BIBREF36), cultural/social understanding BIBREF37, BIBREF38, BIBREF39, visual scene comprehension BIBREF40, and general commonsense question answering BIBREF0, BIBREF41. Most of them are in a multi-choice QA setting for discriminative commonsense reasoning, among which CSQA BIBREF0 and SWAG BIBREF1 are two typical examples. The input of the CSQA task is a question that needs commonsense reasoning and there are five candidate answers (words/phrases). The SWAG task asks models to select which situation is the most plausible next situation, given a sentence describing an event.",
"The two tasks share very similar objectives with large pre-trained language encoders like BERT BIBREF42: Masked-LM can predict the missing words in an incomplete sentence, which is similar to the CSQA setting; NextSentPrediction classifies whether a sentence is the next sentence of the given sentence in the corpora, which can be seen as using distant supervision for the SWAG task. Thus, simply fine-tuning such large pre-trained language encoders can yield near or exceeding human performance BIBREF43, BIBREF2, but it does not necessarily mean machine reasoners can really produce new assumptions in an open and generative setting. The proposed CommonGen, to the best of our knowledge, is the first dataset and task for generative commonsense reasoning."
],
[
"Constrained or controllable text generation aims to decode realistic sentences that have expected attributes such as sentiment BIBREF44, BIBREF9, tense BIBREF9, template BIBREF45, style BIBREF46, BIBREF47, BIBREF48, etc. The most similar scenario with our task is lexically constrained sentence encoding, which has been studied mainly in the machine translation community BIBREF49, BIBREF50 for dealing with terminology and additional bilingual dictionaries.",
"Classic methods usually modify the (beam) searching algorithms to accommodate lexical constraints like Grid Beam Search BIBREF10. The most recent work in this line is the CGMH BIBREF51 model, which works in the inference stage to sample sentences with a sequence of multiple keywords from language models. However, our task brings more challenges: 1) we do not assume there is a fixed order of keywords in target sentences; 2) we allow morphological changes of the keywords; 3) the decoded sentences must describe highly plausible scenes in our daily life. Current methods cannot well address these issues and also work extremely slow to generate grammatical sentences. We instead mainly investigate sequence-to-sequence architectures, especially models that are based on editing operations and non-autoregressive. Pre-trained seq2seq generation models like UniLM BIBREF24 and BRAT BIBREF52 are usually initialized with pre-trained language encoder and then further fine-tuned with multiple NLG tasks. The UniLM archives the best performance on our proposed CommonGen task, while being far from human-level performance and hardly interpretable."
],
[
"In this paper, we purpose a novel constrained text generation task for generative commonsense reasoning. We introduce a new large-scale dataset named CommonGen and investigate various methods on them. Through our extensive experiments and human evaluation, we demonstrate that the inherent difficulties of the new task cannot be addressed by even the state-of-the-art pre-trained language generation model.",
"For the future research, we believe the following directions are highly valuable to explore: 1) specially designed metrics for automatic evaluation that focus on commonsense plausibility; 2) better mechanisms for retrieving and imposing useful commonsense knowledge into sentence generation processes; 3) explicitly modeling keyword-centric edits (e.g. insertion, deletion, morphological changes) such that relevant commonsense knowledge can be well utilized. We also believe that models performed well on CommonGen can be easily transferred to other commonsense-required reasoning tasks with few annotations, including image/video captioning, visual question answering, and discriminative multi-choice commonsense question answering."
]
]
}
|
{
"question": [
"What measures were used for human evaluation?",
"What automatic metrics are used for this task?",
"Are the models required to also generate rationales?",
"Are the rationales generated after the sentences were written?",
"Are the sentences in the dataset written by humans who were shown the concept-sets?",
"Where do the concept sets come from?"
],
"question_id": [
"8d1f9d3aa2cc2e2e58d3da0f5edfc3047978f3ee",
"5065ff56d3c295b8165cb20d8bcfcf3babe9b1b8",
"c34a15f1d113083da431e4157aceb11266e9a1b2",
"061682beb3dbd7c76cfa26f7ae650e548503d977",
"3518d8eb84f6228407cfabaf509fd63d60351203",
"617c77a600be5529b3391ab0c21504cd288cc7c7"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"text reasoning",
"text reasoning",
"text reasoning",
"text reasoning",
"text reasoning",
"text reasoning"
],
"question_writer": [
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself)."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”."
],
"highlighted_evidence": [
"To have an estimation about human performance in each metric, we iteratively treat every reference sentence in dev/test data as the prediction to be compared with all references (including itself). That is, if a model has the same reasoning ability with average performance of our crowd workers, its results should exceed this “human bound”."
]
}
],
"annotation_id": [
"11abf8a9688d03f0f9020b5fc7ce0e9a41c3642c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"BLEU-3/4",
"ROUGE-2/L",
"CIDEr",
"SPICE",
"BERTScore"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"For automatically evaluating our methods, we propose to use widely used metric for image/video captioning. This is because the proposed CommonGen task can be regarded as also a caption task where the context are incomplete scenes with given concept-sets. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons more clear, we show the delta of BERTScore results by subtracting the score of merely using input concept-sets as target sentences, named $\\triangle $BERTS."
],
"highlighted_evidence": [
"For automatically evaluating our methods, we propose to use widely used metric for image/video captioning. This is because the proposed CommonGen task can be regarded as also a caption task where the context are incomplete scenes with given concept-sets. Therefore, we choose BLEU-3/4 BIBREF28, ROUGE-2/L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31 as the main metrics. Apart from these classic metrics, we also include a novel embedding-based metric named BERTScore BIBREF32. To make the comparisons more clear, we show the delta of BERTScore results by subtracting the score of merely using input concept-sets as target sentences, named $\\triangle $BERTS."
]
}
],
"annotation_id": [
"a5f6994dbe5280e6fca93898d7c658b1cce3de1e"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings)."
],
"highlighted_evidence": [
"We explore how to utilize additional commonsense knowledge (i.e. rationales) as the input to the task. Like we mentioned in Section SECREF6, we search relevant sentences from the OMCS corpus as the additional distant rationales, and ground truth rationale sentences for dev/test data. The inputs are no longer the concept-sets themselves, but in a form of “[rationales$|$concept-set]” (i.e. concatenating the rationale sentences and original concept-set strings)."
]
}
],
"annotation_id": [
"86bf1f40d410a67ebd40a89af9672808fa26cf2e"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data."
],
"highlighted_evidence": [
"We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes."
]
}
],
"annotation_id": [
"1742bd7774eeaffd07421b5965a23ddbefd41634"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.",
"We collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data."
],
"highlighted_evidence": [
"It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative for common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators.\n\nWe collect more human-written scenes for each concept-set in dev and test set through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourage them to use common sense in creating their scenes."
]
}
],
"annotation_id": [
"f8fc7762ca9f7dab8c3a935c4846286e40a4cecc"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"These concept-sets are sampled from several large corpora of image/video captions"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Towards empowering machines with the generative commonsense reasoning ability, we create a large-scale dataset, named CommonGen, for the constrained text generation task. We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes. Through crowd-sourcing via Amazon Mechanical Turk (AMT), we finally obtain $89,028$ human-written sentences as expected outputs. We investigate the performance of sophisticated sequence generation methods for the proposed task with both automatic metrics and human evaluation. The experiments show that all methods are far from human performance in generative commonsense reasoning. Our main contributions are as follows: 1) We introduce the first large-scale constrained text generation dataset targeting at generative commonsense reasoning; 2) We systematically compare methods for this (lexically) constrained text generation with extensive experiments and evaluation. 3) Our code and data are publicly available (w/ the URL in the abstract), so future research in this direction can be directly developed in a unified framework.",
"Following the general definition in the largest commonsense knowledge graph, ConceptNet BIBREF11, we understand a concept as a common noun or verb. We aim to test the ability of generating natural scenes with a given set of concepts. The expected concept-sets in our task are supposed to be likely co-occur in natural, daily-life scenes . The concepts in images/videos captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total."
],
"highlighted_evidence": [
"We collect $37,263$ concept-sets as the inputs, each of which contains three to five common concepts. These concept-sets are sampled from several large corpora of image/video captions, such that the concepts inside them are more likely to co-occur in natural scenes.",
"The expected concept-sets in our task are supposed to be likely co-occur in natural, daily-life scenes . The concepts in images/videos captions, which usually describe scenes in our daily life, thus possess the desired property. We therefore collect a large amount of caption sentences from a variety of datasets, including VATEX BIBREF4, LSMDC BIBREF12, ActivityNet BIBREF13, and SNLI BIBREF15, forming 1,040,330 sentences in total."
]
}
],
"annotation_id": [
"5feee9f32509dbe70aec97b4eba68600c4ea973f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: A motivating example for generative commonsense reasoning and the COMMONGEN task. A reasoner gets a concept-set as the input and should generate a sentence that covers all given concepts while describing a common scene (in the green box) out of less plausible ones (in the red box).",
"Figure 2: The frequency of top 50 single concepts (upper) and co-occurred concept-pairs (lower) in the test data.",
"Table 1: The basic statistics of COMMONGEN.",
"Table 2: Experimental results of different baseline methods on the COMMONGEN.",
"Table 3: The average humane evaluation ranking scores and hit@top3 rates for each tested system."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"4-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png"
]
}
|
1910.00458
|
MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension
|
Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligence systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart where answers are usually spans of text within given passages. Moreover, most existing MCQA datasets are small in size, making the learning task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset to help model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate MMM significantly advances the state-of-the-art on four representative MCQA datasets.
|
{
"section_name": [
"Introduction",
"Methods",
"Methods ::: Model Architecture",
"Methods ::: Multi-step Attention Network",
"Methods ::: Two Stage Training",
"Methods ::: Two Stage Training ::: Coarse-tuning Stage",
"Methods ::: Two Stage Training ::: Multi-task Learning Stage",
"Experimental Setup ::: Datasets",
"Experimental Setup ::: Speaker Normalization",
"Experimental Setup ::: Multi-task Learning",
"Experimental Setup ::: Training Details",
"Results",
"Discussion ::: Why does natural language inference help?",
"Discussion ::: Can other tasks help with MCQA?",
"Discussion ::: NLI dataset helps with convergence",
"Discussion ::: Multi-stage or Multi-task",
"Discussion ::: Multi-steps reasoning is important",
"Discussion ::: Could the source dataset be benefited?",
"Discussion ::: Error Analysis",
"Related Work",
"Conclusions"
],
"paragraphs": [
[
"Building a system that comprehends text and answers questions is challenging but fascinating, which can be used to test the machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA such as SQuAD BIBREF2, and HotPotQA BIBREF3. 2) multiple-choice QA (MCQA) tasks such as MultiRC BIBREF4, and MCTest BIBREF5.",
"In comparison to extractive/abstractive QA tasks, the answers of the MCQA datasets are in the form of open, natural language sentences and not restricted to spans in text. Various question types exist such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore it requires more advanced reading skills for the machine to perform well on this task. Table TABREF1 shows one example from one of MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some common sense knowledge to infer that such a conversation can only happen between classmates rather than brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. As a result, the performance of machine readers on these tasks can more accurately gauge comprehension ability of a model.",
"Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11.",
"We proposed MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tuned our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leveraged the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tuned the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based model on MCQA datasets. Moreover, we also proposed a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtained better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset that already achieves 88.1% for the baseline). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advanced the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset)."
],
[
"In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. A MCQA model aims to choose one correct answer from answer options based on $P$ and $Q$."
],
[
"Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, question and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence will be encoded by a sentence encoder to get the representation vector $H \\in \\mathbb {R}^{d\\times l}$, which is then projected into a single value $p=C(H)$ ($p\\in \\mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into the probability vector through a softmax layer. We choose the option with highest logit value $p$ as the answer. Cross entropy loss is used as the loss function. We used the pre-trained bidirectional transformer encoder, i.e., BERT and RoBERTa as the sentence encoder. The top-level classifier will be detailed in the next subsection."
],
[
"For the top-level classifier upon the sentence encoder, the simplest choice is a two-layer full-connected neural network (FCNN), which consist of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for the down-streaming classification tasks and performs very well BIBREF8. Inspired from the success of the attention network widely used in the span-based QA task BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via the multi-step reasoning.",
"The MAN classifier works as follows. A pair of question and answer option together is considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\\in \\mathbb {R}^{d\\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\\in \\mathbb {R}^{d\\times q}$. Alternatively, we can also encode the passage and (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them in a pair performs better.",
"We then perform $K$-step reasoning over the memory to output the final prediction. Initially, the initial state $\\mathbf {s}^0$ in step 0 is the summary of $H^P$ via self-attention: $\\mathbf {s}^0=\\sum _i \\alpha _i H_i^P$, where $\\alpha _i=\\frac{exp(w_1^TH_i^P)}{\\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \\in {1,2,...,K-1}$, the state is calculated by:",
"where $\\mathbf {x}^k=\\sum _i\\beta _iH_i^{QO}$ and $\\beta _i=\\frac{exp(w_2^T[\\mathbf {s}^{k-1};H_i^{QO}])}{\\sum _j exp(w_2^T[\\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state:",
"Basically, the MAN classifier calculates the attention scores between the passage and (question, option) pair step by step dynamically such that the attention can refine itself through several steps of deliberation. The attention mechanism can help filter out irrelevant information in the passage against (question, option) pair."
],
[
"We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets as shown in Figure FIGREF10."
],
[
"We first fine-tune the sentence encoder of our model with natural language inference (NLI) tasks. For exploration, we have also tried to fine-tune the sentence encoder on other types of tasks such as sentiment analysis, paraphrasing, and span-based question answering at this stage. However, we found that only NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details."
],
[
"After corase-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset together via multi-task learning. We share all model parameters including the sentence encoder as well as the top-level classifier for these two datasets."
],
[
"We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits."
],
[
"Passages in DREAM dataset are dialogues between two persons or more. Every utterance in a dialogue starts with the speaker name. For example, in utterance “m: How would he know?”, “m” is the abbreviation of “man” indicating that this utterance is from a man. More than 90% utterances have the speaker names as “w,” “f,” and “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” In order to make it clear for the model to learn which speaker the question is asking about, we used a speaker normalization strategy by replacing “w” or “f” with “woman” and “m” with “man” for the speaker names in the utterances. We found this simple strategy is quite effective, providing us with 1% improvement. We will always use this strategy for the DREAM dataset for our method unless explicitly mentioned."
],
[
"For the multi-task learning stage, at each training step, we randomly selected a dataset from the two datasets (RACE and the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps or the early stopping criterion has been met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of each dataset compared to the cumulative size of all datasets BIBREF17."
],
[
"We used a linear learning rate decay schedule with warm-up proportion of $0.1$. We set the dropout rate as $0.1$. The maximum sequence length is set to 512. We clipped the gradient norm to 5 for DREAM dataset and 0 for other datasets. The learning rate and number of training epochs vary for different datasets and encoder types, which are summarized in Section 1 of the Supplementary Material.",
"More than 90% of passages have more than 512 words in the TOEFL dataset, which exceed the maximum sequence length that BERT supports, thus we cannot process the whole passage within one forward pass. To solve this issue, we propose the sliding window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets and each snippet from the same passage will be assigned with the same label. In training phase, all snippets will be used for training, and in inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with highest logit value as the prediction. In experiments, we found the overlap of 256 words is the optimal, which can improve the BERT-Base model from accuracy of 50.0% to 53.2%. We adopted this sliding window strategy only for the TOEFL dataset."
],
[
"We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%.",
"We also test our method on three other MCQA datasets: MCTest including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method can improve the BERT-Large model by at least 10%. For both MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9 with only one difference that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than MC500. We think the reason is that the data size of MC160 is not enough to well fine-tune the large models with a huge amount of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of BERT and RoBERTa models on the small datasets so that the best performance of MC160 can even surpass that of MC500. This demonstrates the effectiveness of our method.",
"To better understand why MMM can be successful, we conducted an ablation study be removing one feature at a time on the BERT-Base model. The results are shown in Table TABREF18. We see that the removal of the second stage multi-task learning part hurts our method most significantly, indicating that the majority of improvement is coming from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, which provides the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we have 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\\sim $1% improvement."
],
[
"As shown in Table TABREF18, coarse-tuning on NLI tasks can help improve the performance of MCQA. We conjecture one of the reasons is that, in order to pick the correct answer, we need to rely on the language inference capability in many cases. As an example in Table TABREF1, the utterance highlighted in the bold and italic font in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment to the evidence sentence while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can deem the passage and the pair of (question, answer) as a pair of premise and hypothesis. Then the process of choosing the right answer to a certain question is similar to the process of choosing the hypothesis that can best entail the premise. In this sense, the part of MCQA task can be deemed as a NLI task. This also agrees with the argument that NLI is a fundamental ability of a natural language processing model and it can help support other tasks that require higher level of language processing abilities BIBREF21. We provided several more examples that require language inference reading skills in the Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data with the coarse-tuning stage."
],
[
"By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something and in some cases, the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks such as sentiment classification, paraphrasing also help with MCQA problems?",
"To answer this question, we select several representative datasets for five categories as the up-stream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments where we first train the BERT-Base models on each of the five categories and then further fine-tune our models on the target dataset: DREAM and MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k train examples) and the Yelp dataset (around 430k train examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For the span-based QA, we use the SQuAD 1.1, SQuAD 2.0 , and MRQA which is a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that sentiment analysis datasets do not help much with our target MCQA datasets. But the paraphrase datasets do bring some improvements for MCQA. For span-based QA, only SQuAD 2.0 helps to improve the performance of the target dataset. Interestingly, although MRQA is much larger than other QA datasets (at least six times larger), it makes the performance worst. This suggests that span-based QA might not the appropriate source tasks for transfer learning for MCQA. We hypothesis this could due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive) while all answers are extractive in the span-based QA datasets.",
"For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three kinds of combinations: MultiNLI alone, MultiNLI plus SNLI denoted as “NLI”, and combining all five datasets together, denoted as “GLUE-NLI”. As the results shown in Table TABREF23, NLI and GLUE-NLI are comparable and both can improve the target dataset by a large margin.",
"Lastly, among all these tasks, using the MCQA task itself, i.e., pretraining on RACE dataset, can help boost the performance, most. This result agrees with the intuition that the in-domain dataset can be the most ideal data for transfer learning.",
"In conclusion, we find that for out-of-domain datasets, the NLI datasets can be most helpful to the MCQA task, indicating that the natural language inference capability should be an important foundation of the MCQA systems. Besides, a larger in-domain dataset, i.e. another MCQA dataset, can also be very useful."
],
[
"The first stage of coarse-tuning with NLI data can not only improve the accuracy but also help the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models that have much larger amount of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of NLI datasets , convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning can make the training loss of the BERT-Base model decrease much faster. More importantly, for the BERT-Large model, without coarse-tuning, the model does not converge at all at the first several epochs, which can be completely resolved by the help of NLI data."
],
[
"In a typical scenario where we have one source and one target dataset, we naturally have a question about whether we should simultaneously train a model on them via multi-task learning or first train on the source dataset then on the target sequentially. Many previous works adopted the latter way BIBREF19, BIBREF20, BIBREF23 and BIBREF20 demonstrated that the sequential fine-tuning approach outperforms the multi-task learning setting in their experiments. However, we had contradictory observations in our experiments. Specifically, we conducted a pair of control experiments: one is that we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune on the target dataset, and the other is that we simultaneously train the model on RACE and the target dataset via multi-task learning. The comparison results are shown in Table TABREF27. We see that compared with sequential fine-tuning, the multi-task learning achieved better performance. We conjecture that in the sequential fine-tuning setting, while the model is being fine-tuned on the target dataset, some information or knowledge learned from the source dataset may be lost since the model is no longer exposed to the source dataset in this stage. In comparison, this information can be kept in the multi-task learning setting and thus can better help improve the target dataset.",
"Now that the multi-task learning approach outperforms the sequential fine-tuning setting, we naturally arrive at another question: what if we merged the coarse-tuning and multi-task learning stages together? That is, what if we simultaneously trained the NLI, source, and target datasets altogether under the multi-task learning framework? We also conducted a pair of control experiments for investigation. The results in Table TABREF27, show that casting the fine-tuning process on three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework with coarse-tuning on out-of-domain datasets and fine-tuning on in-domain datesets."
],
[
"Previous results show that the MAN classifier shows improvement compared with the FCNN classifier, but we are also interested in how the performance change while varying the number of reasoning steps $K$ as shown in Figure FIGREF29. $K=0$ means that we do not use MAN but FCNN as the classifier. We observe that there is a gradual improvement as we increase $K=1$ to $K=5$, but after 5 steps the improvements have saturated. This verifies that an appropriate number of steps of reasoning is important for the memory network to reflect its benefits."
],
[
"So far we have been discussing the case where we do multi-task learning with the source dataset RACE and various much smaller target datasets to help improve the targets. We also want to see whether our proposed techniques can also benefit the source dataset itself. Table TABREF31 summarizes the results of BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding the multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques can bring in improvements over the baseline model for the source dataset RACE, among which NLI coarse-tuning stage can help elevate the scores most.",
"Since we found all parts of MMM can work well for the source dataset, we tried to use them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we listed the official report of scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained by the RoBERTa-Large encoder."
],
[
"In order to investigate how well our model performs for different types of questions, we did an error analysis by first randomly selecting 150 samples that had wrong predictions by the BERT-Base baseline model from the development set of DREAM dataset. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criterion is described in the Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluate our best model on these samples and report the accuracy of each question type in the last column of Table TABREF34. We find that our best model can improve upon every question type significantly especially for the matching problems, and most surprisingly, our best model can even greatly improve its ability on solving the arithmetic problems, achieving the accuracy of 73.7%.",
"However, could our model really do math? To investigate this question, we sampled some arithmetic questions that are correctly predicted by our model, made small alterations to the passage or question, and then checked whether our model can still make correct choices. We found our model is very fragile to these minor alterations, implicating that the model is actually not that good at arithmetic problems. We provided one interesting example in the Section 3 of the Supplementary Material."
],
[
"There are increasing interests in machine reading comprehension (MRC) for question answering (QA). The extractive QA tasks primarily focus on locating text spans from the given document/corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers are still extractive in these datasets. The multi-choice QA datasets are collected either via crowd sourcing, or collected from examinations designed by educational experts BIBREF7. In this type of QA datasets, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5.",
"Progress of research for MRC first relies on the breakthrough of the sentence encoder, from the basic LSTM to the pre-trained transformer based model BIBREF8, which has elevated the performance of all MRC models by a large margin. Besides, the attention mechanisms between the context and the query can empower the neural models with higher performance BIBREF11. In addition, some techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can be also helpful.",
"Transfer learning has been widely proved to be effective across many domain in NLP. In the QA domain, the most well-known example of transfer learning would be fine-tuning the pre-trained language model such as BERT to the down-streaming QA datasets such as SQuAD BIBREF8. Besides, multi-task learning can also be deemed as a type of transfer learning, since during the training of multiple datasets from different domains for different tasks, knowledge will be shared and transferred from each task to others, which has been used to build a generalized QA model BIBREF30. However, no previous works have investigated that the knowledge from the NLI datasets can also be transferred to improve the MCQA task."
],
[
"We propose MMM, a multi-stage multi-task transfer learning method on the multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieved significant improvements for MCQA. We also did detailed analysis to explore the importance of both our training strategies as well as different kinds of in-domain and out-of-domain datasets. We hope our work here can also shed light on new directions for other NLP domains."
]
]
}
|
{
"question": [
"How big are improvements of MMM over state of the art?",
"What out of domain datasets authors used for coarse-tuning stage?",
"What are state of the art methods MMM is compared to?",
"What four representative datasets are used for bechmark?"
],
"question_id": [
"53d6cbee3606dd106494e2e98aa93fdd95920375",
"9dc844f82f520daf986e83466de0c84d93953754",
"9fe4a2a5b9e5cf29310ab428922cc8e7b2fc1d11",
"36d892460eb863220cd0881d5823d73bbfda172c"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"test accuracy of 88.9%, which exceeds the previous best by 16.9%"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report model performances that use all our proposed method, MMM (MAN classifier + speaker normalization + two stage learning strategies). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder marked by the parentheses, from which we can see that the performance augmentation is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\\sim $4% improvement, pushing the accuracy closer to the human performance. Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%."
],
"highlighted_evidence": [
"Overall, MMM has achieved a new SOTA, i.e., test accuracy of 88.9%, which exceeds the previous best by 16.9%."
]
}
],
"annotation_id": [
"11e9dc8da152c948ba3f0ed165402dffad6fae49"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"MultiNLI BIBREF15 and SNLI BIBREF16 "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits."
],
"highlighted_evidence": [
"For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. "
]
}
],
"annotation_id": [
"fb11cb05fe3d851cc4d17da20a5b958dad0af096"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "FTLM++, BERT-large, XLNet",
"evidence": [
"FLOAT SELECTED: Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines."
]
}
],
"annotation_id": [
"6f65f4be18453d162510778c0b8c582ffc5f27f7"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"DREAM, MCTest, TOEFL, and SemEval-2018 Task 11"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11."
],
"highlighted_evidence": [
"To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11."
]
}
],
"annotation_id": [
"605df693493ead557174f3a1ebb05efb09517f15"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
}
|
{
"caption": [
"Table 1: Data samples of DREAM dataset. ( √ : the correct answer)",
"Figure 1: Model architecture. “Encoder”is a pre-trained sentence encoder such as BERT. “Classifier” is a top-level classifier.",
"Figure 2: Multi-stage and multi-task fine-tuning strategy.",
"Table 2: Statistics of MCQA datasets. (crowd.: crowd-sourcing; ?: answer options are not text snippets from reference documents.)",
"Table 3: Accuracy on the DREAM dataset. Performance marked by ? is reported by (Sun et al. 2019). Numbers in parentheses indicate the accuracy increased by MMM compared to the baselines.",
"Table 4: Performance in accuracy (%) on test sets of other datasets: MCTest (MC160 and MC500), TOEFL, and SemEval. Performance marked by ? is reported by (Richardson, Burges, and Renshaw 2013) and that marked by † is from (Ostermann et al. 2018). Numbers in the parentheses indicate the accuracy increased by MMM. “-B” means the base model and “-L” means the large model.",
"Table 5: Ablation study on the DREAM and MCTest-MC160 (MC160) datasets. Accuracy (%) is on the development set.",
"Table 6: Transfer learning results for DREAM and MC500. The BERT-Base model is first fine-tuned on each source dataset and then further fine-tuned on the target dataset. Accuracy is on the the development set. A two-layer FCNN is used as the classifier.",
"Table 7: Comparison between multi-task learning and sequential fine-tuning. BERT-Base model is used and the accuracy is on the development set. Target refers to the target dataset in transfer learning. A two-layer FCNN instead of MAN is used as the classifier.",
"Figure 3: Train loss curve with respect to optimization steps. With prior coarse-tuning on NLI data, convergence becomes much faster and easier.",
"Figure 4: Effects of the number of reasoning steps for the MAN classifier. 0 steps means using FCNN instead of MAN. The BERTBase model and DREAM dataset are used.",
"Table 8: Ablation study for the RACE dataset. The accuracy is on the development set. All parts of MMM improve this source dataset.",
"Table 9: Comparison of the test accuracy of the RACE dataset between our approach MMM and the official reports that are from the dataset leaderboard.",
"Table 10: Error analysis on DREAM. The column of “Percent” reports the percentage of question types among 150 samples that are from the development set of DREAM dataset that are wrongly predicted by the BERT-Base baseline model. The column of “Accuracy” reports the accuracy of our best model (RoBERTa-Large+MMM) on these samples."
],
"file": [
"1-Table1-1.png",
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"5-Table6-1.png",
"6-Table7-1.png",
"6-Figure3-1.png",
"6-Figure4-1.png",
"7-Table8-1.png",
"7-Table9-1.png",
"7-Table10-1.png"
]
}
|
2001.11268
|
Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks
|
This research on data extraction methods applies recent advances in natural language processing to evidence synthesis based on medical texts. Texts of interest include abstracts of clinical trials in English and in multilingual contexts. The main focus is on information characterized via the Population, Intervention, Comparator, and Outcome (PICO) framework, but data extraction is not limited to these fields. Recent neural network architectures based on transformers show capacities for transfer learning and increased performance on downstream natural language processing tasks such as universal reading comprehension, brought forward by this architecture's use of contextualized word embeddings and self-attention mechanisms. This paper contributes to solving problems related to ambiguity in PICO sentence prediction tasks, as well as highlighting how annotations for training named entity recognition systems are used to train a high-performing, but nevertheless flexible architecture for question answering in systematic review automation. Additionally, it demonstrates how the problem of insufficient amounts of training annotations for PICO entity extraction is tackled by augmentation. All models in this paper were created with the aim to support systematic review (semi)automation. They achieve high F1 scores, and demonstrate the feasibility of applying transformer-based classification methods to support data mining in the biomedical literature.
|
{
"section_name": [
"INTRODUCTION",
"INTRODUCTION ::: Tools for SR automation and PICO classification",
"INTRODUCTION ::: Sentence classification data",
"INTRODUCTION ::: Question answering data ::: SQuAD",
"INTRODUCTION ::: Question answering data ::: Ebm-nlp",
"INTRODUCTION ::: Introduction to transformers",
"INTRODUCTION ::: Weaknesses in the previous sentence classification approach",
"INTRODUCTION ::: Contributions of this research",
"METHODOLOGY ::: Feature representation and advantages of contextualization",
"METHODOLOGY ::: Sentence classification ::: Preparation of the data",
"METHODOLOGY ::: Sentence classification ::: Fine-tuning",
"METHODOLOGY ::: Sentence classification ::: Post-training assignment of classes",
"METHODOLOGY ::: Question answering ::: Preparation of the data",
"METHODOLOGY ::: Question answering ::: Fine-tuning",
"RESULTS ::: Feature representation and contextualization",
"RESULTS ::: Sentence classification",
"RESULTS ::: Question answering",
"DISCUSSION",
"DISCUSSION ::: Limitations",
"CONCLUSION",
"ACKNOWLEDGEMENTS",
"FUNDING",
"Availability of the code and data"
],
"paragraphs": [
[
"Systematic reviews (SR) of randomized controlled trials (RCTs) are regarded as the gold standard for providing information about the effects of interventions to healthcare practitioners, policy makers and members of the public. The quality of these reviews is ensured through a strict methodology that seeks to include all relevant information on the review topic BIBREF0.",
"A SR, as produced by the quality standards of Cochrane, is conducted to appraise and synthesize all research for a specific research question, therefore providing access to the best available medical evidence where needed BIBREF1. The research question is specified using the PICO (population; intervention; comparator; outcomes) framework. The researchers conduct very broad literature searches in order to retrieve every piece of clinical evidence that meets their review's inclusion criteria, commonly all RCTs of a particular healthcare intervention in a specific population. In a search, no piece of relevant information should be missed. In other words, the aim is to achieve a recall score of one. This implies that the searches are broad BIBREF2, and authors are often left to screen a large number of abstracts manually in order to identify a small fraction of relevant publications for inclusion in the SR BIBREF3.",
"The number of RCTs is increasing, and with it increases the potential number of reviews and the amount of workload that is implied for each. Research on the basis of PubMed entries shows that both the number of publications and the number of SRs increased rapidly in the last ten years BIBREF4, which is why acceleration of the systematic reviewing process is of interest in order to decrease working hours of highly trained researchers and to make the process more efficient.",
"",
"In this work, we focus on the detection and annotation of information about the PICO elements of RCTs described in English PubMed abstracts. In practice, the comparators involved in the C of PICO are just additional interventions, so we often refer to PIO (populations; interventions; outcomes) rather than PICO. Focus points for the investigation are the problems of ambiguity in labelled PIO data, integration of training data from different tasks and sources and assessing our model's capacity for transfer learning and domain adaptation.",
"Recent advances in natural language processing (NLP) offer the potential to be able to automate or semi-automate the process of identifying information to be included in a SR. For example, an automated system might attempt to PICO-annotate large corpora of abstracts, such as RCTs indexed on PubMed, or assess the results retrieved in a literature search and predict which abstract or full text article fits the inclusion criteria of a review. Such systems need to be able to classify and extract data of interest. We show that transformer models perform well on complex data-extraction tasks. Language models are moving away from the semantic, but static representation of words as in Word2Vec BIBREF5, hence providing a richer and more flexible contextualized representation of input features within sentences or long sequences of text.",
"The rest of this paper is organized as follows. The remainder of this section introduces related work and the contributions of our work. Section 2 describes the process of preparing training data, and introduces approaches to fine-tuning for sentence classification and question answering tasks. Results are presented in section 3, and section 4 includes a critical evaluation and implications for practice."
],
[
"The website systematicreviewtools.com BIBREF6 lists 36 software tools for study selection to date. Some tools are intended for organisational purposes and do not employ PICO classification, such as Covidence BIBREF7. The tool Rayyan uses support vector machines BIBREF8. RobotReviewer uses neural networks, word embeddings and recently also a transformer for named entity recognition (NER) BIBREF9. Question answering systems for PICO data extraction exist based on matching words from knowledge bases, hand-crafted rules and naïve Bayes classification, both on entity and sentence level BIBREF10, BIBREF11, but commonly focus on providing information to practicing clinicians rather than systematic reviewers BIBREF12.",
"In the following we introduce models related to our sentence and entity classification tasks and the data on which our experiments are based. We made use of previously published training and testing data in order to ensure comparability between models."
],
[
"In the context of systematic review (semi)automation, sentence classification can be used in the screening process, by highlighting relevant pieces of text. A long short-term memory (LSTM) neural network trained with sentences of structured abstracts from PubMed was published in 2018 BIBREF13. It uses a pre-trained Word2Vec embedding in order to represent each input word as a fixed vector. Due to the costs associated with labelling, its authors acquired sentence labels via automated annotation. Seven classes were assigned on the basis of structured headings within the text of each abstract. Table TABREF4 provides an overview of class abbreviations and their meaning.In the following we refer to it as the PubMed data.",
"The LSTM itself yields impressive results with F1 scores for annotation of up to 0.85 for PIO elements, it generalizes across domains and assigns one label per sentence. We were able to confirm these scores by replicating a local version of this model."
],
[
"The Stanford Question Answering Dataset (SQuAD) is a reading-comprehension dataset for machine learning tasks. It contains question contexts, questions and answers and is available in two versions. The older version contains only questions that can be answered based on the given context. In its newer version, the dataset also contains questions which can not be answered on the basis of the given context. The SQuAD creators provide an evaluation script, as well as a public leader board to compare model performances BIBREF14."
],
[
"In the PICO domain, the potential of NER was shown by Nye and colleagues in using transformers, as well as LSTM and conditional random fields. In the following, we refer to these data as the ebm-nlp corpus. BIBREF15. The ebm-nlp corpus provided us with 5000 tokenized and annotated RCT abstracts for training, and 190 expert-annotated abstracts for testing. Annotation in this corpus include PIO classes, as well as more detailed information such as age, gender or medical condition. We adapted the human-annotated ebm-nlp corpus of abstracts for training our QA-BERT question answering system."
],
[
"In the following, the bidirectional encoder representations from transformers (BERT) architecture is introduced BIBREF16. This architecture's key strengths are rooted in both feature representation and training. A good feature representation is essential to ensure any model's performance, but often data sparsity in the unsupervised training of embedding mechanisms leads to losses in overall performance. By employing a word piece vocabulary, BERT eliminated the problem of previously unseen words. Any word that is not present in the initial vocabulary is split into a sub-word vocabulary. Especially in the biomedical domain this enables richer semantic representations of words describing rare chemical compounds or conditions. A relevant example is the phrase ’two drops of ketorolac tromethamine’, where the initial three words stay intact, while the last words are tokenized to ’ket’, ’#oro’, ’#lac’, ’tro’, ’#meth’, ’#amine’, hence enabling the following model to focus on relevant parts of the input sequence, such as syllables that indicate chemical compounds. When obtaining a numerical representation for its inputs, transformers apply a ’self-attention’ mechanism, which leads to a contextualized representation of each word with respect to its surrounding words.",
"BERT's weights are pre-trained in an unsupervised manner, based on large corpora of unlabelled text and two pre-training objectives. To achieve bidirectionality, its first pre-training objective includes prediction of randomly masked words. Secondly, a next-sentence prediction task trains the model to capture long-term dependencies. Pre-training is computationally expensive but needs to be carried out only once before sharing the weights together with the vocabulary. Fine-tuning to various downstream tasks can be carried out on the basis of comparably small amounts of labelled data, by changing the upper layers of the neural network to classification layers for different tasks.",
"SCIBERT is a model based on the BERT-base architecture, with further pre-trained weights based on texts from the Semantic Scholar search engine BIBREF17. We used these weights as one of our three starting points for fine-tuning a sentence classification architecture BIBREF18. Furthermore, BERT-base (uncased) and Bert multilingual (cased, base architecture) were included in the comparison BIBREF16."
],
[
"In the following, we discuss weaknesses in the PubMed data, and LSTM models trained on this type of labelled data. LSTM architectures commonly employ a trimmed version of Word2Vec embeddings as embedding layer. In our case, this leads to 20% of the input data being represented by generic `Unknown' tokens. These words are missing because they occur so rarely that no embedding vector was trained for them. Trimming means that the available embedding vocabulary is then further reduced to the known words of the training, development and testing data, in order to save memory and increase speed. The percentage of unknown tokens is likely to increase when predicting on previously unseen and unlabelled data. We tested our locally trained LSTM on 5000 abstracts from a study-based register BIBREF19 and found that 36% of all unique input features did not have a known representation.",
"In the case of the labelled training and testing data itself, automatic annotation carries the risk of producing wrongly labelled data. But it also enables the training of neural networks in the first place because manual gold standard annotations for a project on the scale of a LSTM are expensive and time-consuming to produce. As we show later, the automated annotation technique causes noise in the evaluation because as the network learns, it can assign correct tags to wrongly labelled data. We also show that sentence labels are often ambiguous, and that the assignment of a single label limits the quality of the predictions for their use in real-world reviewing tasks.",
"We acknowledge that the assignment of classes such as `Results' or `Conclusions' to sentences is potentially valuable for many use-cases. However, those sentences can contain additional information related to the PICO classes of interest. In the original LSTM-based model the A, M, R, and C data classes in Table TABREF4 are utilized for sequence optimization, which leads to increased classification scores. Their potential PICO content is neglected, although it represents crucial information in real-world reviewing tasks.",
"A general weakness of predicting labels for whole sentences is the practical usability of the predictions. We will show sentence highlighting as a potential use-case for focusing reader's attention to passages of interest. However, the data obtained through this method are not fine-grained enough for usage in data extraction, or for the use in pipelines for automated evidence synthesis. Therefore, we expand our experiments to include QA-BERT, a question-answering model that predicts the locations of PICO entities within sentences."
],
[
"In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. By predicting on multilingual and full text contexts we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data.",
"In the second fine-tuning approach, we apply a question answering architecture to the task of data extraction. Previous models for PICO question answering relied on vast knowledge bases and hand-crafted rules. Our fine-tuning approach shows that an abstract as context, together with a combination of annotated PICO entities and SQuAD data can result in a system that outperforms contemporary entity recognition systems, while retaining general reading comprehension capabilities."
],
[
"A language processing model's performance is limited by its capability of representing linguistic concepts numerically. In this preliminary experiment, we used the PubMed corpus for sentence classification to show the quality of PICO sentence embeddings retrieved from BERT. We mapped a random selection of 3000 population, intervention, and outcome sentences from the PubMed corpus to BERT-base uncased and SCIBERT. This resulted in each sentence being represented by a fixed length vector of 768 dimensions in each layer respectively, as defined by the model architecture's hidden size. These vectors can be obtained for each of the network's layers, and multiple layers can be represented together by concatenation and pooling. We used the t-distributed Stochastic Neighbour Embedding (t-SNE) algorithm to reduce each layer-embedding into two-dimensional space, and plotted the resulting values. Additionally, we computed adjusted rand scores in order to evaluate how well each layer (or concatenation thereof, always using reduce_mean pooling) represents our input sequence. The rand scores quantify the extent to which a naïve K-means (N=3) clustering algorithm in different layers alone led to correct grouping of the input sentences."
],
[
"We used the PubMed corpus to fine-tune a sentence classification architecture. Class names and abbreviations are displayed in Table TABREF4. The corpus was supplied in pre-processed form, comprising 24,668 abstracts. For more information about the original dataset we refer to its original publication BIBREF13. Because of the PICO framework, methods for systematic review semi(automation) commonly focus on P, I, and O detection. A, M, R, and C classes are an additional feature of this corpus. They were included in the following experiment because they represent important information in abstracts and they occur in a vast majority of published trial text. Their exclusion can lead to false classification of sentences in full abstracts. In a preliminary experiment we summarized A, M, R, and C sentences as a generic class named ’Other’ in order to shift the model's focus to PIO classes. This resulted in high class imbalance, inferior classification scores and a loss of ability to predict these classes when supporting systematic reviewers during the screening process.",
"In the following, abstracts that did not include a P, I, and O label were excluded. This left a total of 129,095 sentences for training, and 14,344 for testing (90:10 split)."
],
[
"We carried out fine-tuning for sentence classification based on BERT-base (uncased), multilingual BERT (cased), and on SCIBERT. We changed the classification layer on top of the original BERT model. It remains as linear, fully connected layer but now employs the sigmoid cross-entropy loss with logits function for optimization. During training, this layer is optimised for predicting probabilities over all seven possible sentence labels. Therefore, this architecture enables multi-class, multi-label predictions. In comparison, the original BERT fine-tuning approach for sentence classification employed a softmax layer in order to obtain multi-class, single-label predictions of the most probable class only. During the training process the model then predicts class labels from Table 1 for each sentence. After each training step, backpropagation then adjusts the model's internal weights. To save GPU resources, a maximal sequence length of 64, batch size 32, learning rate of $2\\times 10^{-5}$, a warm-up proportion of 0.1 and two epochs for training were used."
],
[
"In the scope of the experiments for this paper, the model returns probabilities for the assignment of each class for every sentence. These probabilities were used to show effects of different probability thresholds (or simply assignment to the most probable class) on recall, precision and F1 scores. The number of classes was set to 7, thereby making use of the full PubMed dataset."
],
[
"Both the training and testing subsets from the ebm-nlp data were adapted to fit the SQuAD format. We merged both datasets in order to train a model which firstly correctly answers PICO questions on the basis of being trained with labelled ebm-nlp data, and secondly retains the flexibility of general-purpose question answering on the basis of SQuAD. We created sets of general, differently phrased P, I, and O questions for the purpose of training a broad representation of each PICO element question.",
"In this section we describe the process of adapting the ebm-nlp data to the second version of the SQuAD format, and then augmenting the training data with some of the original SQuAD data. Figure FIGREF19 shows an example of the converted data, together with a high-level software architecture description for our QA-BERT model. We created a conversion script to automate this task. To reduce context length, it first split each ebm-nlp abstract into sentences. For each P, I, and O class it checked the presence of annotated entity spans in the ebm-nlp source files. Then, a question was randomly drawn from our set of general questions for this class, to complete a context and a span-answer pair in forming a new SQuAD-like question element. In cases where a sentence did not contain a span, a question was still chosen, but the answer was marked as impossible, with the plausible answer span set to begin at character 0. In the absence of impossible answers, the model would always return some part of the context as answer, and hence be of no use for rarer entities such as P, which only occurs in only 30% of all context sentences.",
"For the training data, each context can contain one possible answer, whereas for testing multiple question-answer pairs are permitted. An abstract is represented as a domain, subsuming its sentences and question answer-text pairs. In this format, our adapted data are compatible with the original SQuAD v.2 dataset, so we chose varying numbers of original SQuAD items and shuffled them into the training data. This augmentation of the training data aims to reduce the dependency on large labelled corpora for PICO entity extraction. Testing data can optionally be enriched in the same way, but for the presentation of our results we aimed to be comparable with previously published models and therefore chose to evaluate only on the subset of expert-annotated ebm-nlp testing data."
],
[
"The python Huggingface Transformers library was used for fine-tuning the question-answering models. This classification works by adding a span-classification head on top of a pre-trained transformer model. The span-classification mechanism learns to predict the most probable start and end positions of potential answers within a given context BIBREF22.",
"The Transformers library offers classes for tokenizers, BERT and other transformer models and provides methods for feature representation and optimization. We used BertForQuestionAnswering. Training was carried out on Google's Colab, using the GPU runtime option. We used a batch size of 18 per GPU and a learning rate of $3^{-5}$. Training lasted for 2 epochs, context length was limited to 150. To reduce the time needed to train, we only used BERT-base (uncased) weights as starting points, and used a maximum of 200 out of the 442 SQuAD domains.",
"To date, the Transformers library includes several BERT, XLM, XLNet, DistilBERT and ALBERT question answering models that can be fine-tuned with the scripts and data that we describe in this paper."
],
[
"Figure FIGREF23 shows the dimensionality-reduced vectors for 3000 sentences in BERT-base, along with the positions of three exemplary sentences. All three examples were labelled as 'P' in the gold standard. This visualization highlights overlaps between the sentence data and ambiguity or noise in the labels.",
"UTF8bsmi",
"Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. This supports a need for multiple tags per sentence, and the fine-tuning of weights within the network.",
"Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base."
],
[
"Precision, recall, and F1 scores, including a comparison with the LSTM, are summarized in Table TABREF22. Underlined scores represent the top score across all models, and scores in bold are the best results for single- and multi-label cases respectively. The LSTM assigns one label only and was outperformed in all classes of main interest (P, I, and O).",
"A potential pitfall of turning this task into multi-label classification is an increase of false-positive predictions, as more labels are assigned than given in the single-labelled testing data in the first place. However, the fine-tuned BERT models achieved high F1 scores, and large improvements in terms of recall and precision. In its last row, Table TABREF22 shows different probability thresholds for class assignment when using the PubMed dataset and our fine-tuned SCIBERT model for multi-label prediction. After obtaining the model's predictions, a simple threshold parameter can be used to obtain the final class labels. On our labelled testing data, we tested 50 evenly spaced thresholds between 0 and 1 in order to obtain these graphs. Here, recall and precision scores in ranges between 0.92 and 0.97 are possible with F1 scores not dropping below 0.84 for the main classes of interest. In practice, the detachment between model predictions and assignment of labels means that a reviewer who wishes to switch between high recall and high precision results can do so very quickly, without obtaining new predictions from the model itself.",
"More visualizations can be found in this project's GitHub repository , including true class labels and a detailed breakdown of true and false predictions for each class. The highest proportion of false classification appears between the results and conclusion classes.",
"The fine-tuned multilingual model showed marginally inferior classification scores on the exclusively English testing data. However, this model's contribution is not limited to the English language because its interior weights embed a shared vocabulary of 100 languages, including German and Chinese. Our evaluation of the multilingual model's capacity for language transfer is of a qualitative nature, as there were no labelled Chinese or German data available. Table TABREF24 shows examples of two abstracts, as predicted by the model. Additionally, this table demonstrates how a sentence prediction model can be used to highlight text. With the current infrastructure it is possible to highlight PICOs selectively, to highlight all classes simultaneously, and to adjust thresholds for class assignment in order to increase or decrease the amount of highlighted sentences. When applied to full texts of RCTs and cohort studies, we found that the model retained its ability to identify and highlight key sentences correctly for each class.",
"",
"We tested various report types, as well as recent and old publications, but remain cautious that large scale testing on labelled data is needed to draw solid conclusions on these model's abilities for transfer learning. For further examples in the English language, we refer to our GitHub repository."
],
[
"We trained and evaluated a model for each P, I, and O class. Table TABREF29 shows our results, indicated as QA-BERT, compared with the currently published leader board for the ebm-nlp data BIBREF25 and results reported by the authors of SCIBERT BIBREF18. For the P and I classes, our models outperformed the results on this leader board. The index in our model names indicates the amount of additional SQuAD domains added to the training data. We never used the full SQuAD data in order to reduce time for training but observed increased performance when adding additional data. For classifying I entities, an increase from 20 to 200 additional SQuAD domains resulted in an increase of 8% for the F1 score, whereas the increase for the O domain was less than 1%. After training a model with 200 additional SQuAD domains, we also evaluated it on the original SQuAD development set and obtained a F1 score of 0.72 for this general reading comprehension task.",
"In this evaluation, the F1 scores represent the overlap of labelled and predicted answer spans on token level. We also obtained scores for the subgroups of sentences that did not contain an answer versus the ones that actually included PICO elements. These results are shown in Table TABREF30.",
"For the P class, only 30% of all sentences included an entity, whereas its sub-classes age, gender, condition and size averaged 10% each. In the remaining classes, these percentages were higher. F1 scores for correctly detecting that a sentence includes no PICO element exceeded 0.92 in all classes. This indicates that the addition of impossible answer elements was successful, and that the model learned a representation of how to discriminate PICO contexts. The scores for correctly predicting PICOs in positive scenarios are lower. These results are presented in Table TABREF30. Here, two factors could influence this score in a negative way. First, labelled spans can be noisy. Training spans were annotated by crowd workers and the authors of the original dataset noted inter-annotator disagreement. Often, these spans include full stops, other punctuation or different levels of detail describing a PICO. The F1 score decreases if the model predicts a PICO, but the predicted span includes marginal differences that were not marked up by the experts who annotated the testing set. Second, some spans include multiple PICOs, sometimes across sentence boundaries. Other spans mark up single PICOS in succession. In these cases the model might find multiple PICOs in a row, and annotate them as one or vice versa."
],
[
"In this work, we have shown possibilities for sentence classification and data extraction of PICO characteristics from abstracts of RCTs.",
"For sentence classification, models based on transformers can predict multiple labels per sentence, even if trained on a corpus that assigns a single label only. Additionally, these architectures show a great level of flexibility with respect to adjusting precision and recall scores. Recall is an important metric in SR tasks and the architectures proposed in this paper enable a post-classification trade-off setting that can be adjusted in the process of supporting reviewers in real-world reviewing tasks.",
"However, tagging whole sentences with respect to populations, interventions and outcomes might not be an ideal method to advance systematic review automation. Identifying a sentence's tag could be helpful for highlighting abstracts from literature searches. This focuses the reader's attention on sentences, but is less helpful for automatically determining whether a specific entity (e.g. the drug aspirin) is mentioned.",
"Our implementation of the question answering task has shown that a substantial amount of PICO entities can be identified in abstracts on a token level. This is an important step towards reliable systematic review automation. With our provided code and data, the QA-BERT model can be switched with more advanced transformer architectures, including XLM, XLNet, DistilBERT and ALBERT pre-trained models. More detailed investigations into multilingual predictions BIBREF26 pre-processing and predicting more than one PICO per sentence are reserved for future work."
],
[
"Limitations in the automatically annotated PubMed training data mostly consist of incomplete detection or noise P, I, and O entities due to the single labelling. We did not have access to multilingual annotated PICO corpora for testing, and therefore tested the model on German abstracts found on PubMed, as well as Chinese data provided by the Cochrane Schizophrenia Group.",
"For the question answering, we limited the use of original SQuAD domains to enrich our data. This was done in order to save computing resources, as an addition of 100 SQuAD domains resulted in training time increases of two hours, depending on various other parameter settings. Adjusted parameters include increased batch size, and decreased maximal context length in order to reduce training time."
],
[
"With this paper we aimed to explore state-of-the-art NLP methods to advance systematic review (semi)automation. Both of the presented fine-tuning approaches for transformers demonstrated flexibility and high performance. We contributed an approach to deal with ambiguity in whole-sentence predictions, and proposed the usage of a completely different approach to entity recognition in settings where training data are sparse.",
"In conclusion we wish to emphasize our argument that for future applications, interoperability is important. Instead of developing yet another stand-alone organizational interface with a machine learning classifier that works on limited data only, the focus should be to develop and train cross-domain and neural models that can be integrated into the backend of existing platforms. The performance of these models should be comparable on standardized datasets, evaluation scripts and leader boards.",
"The logical next step, which remains less explored in the current literature because of its complexity, is the task of predicting an RCT's included or excluded status on the basis of PICOs identified in its text. For this task, more complex architectures that include drug or intervention ontologies could be integrated. Additionally, information from already completed reviews could be re-used as training data."
],
[
"We would like to thank Clive Adams for providing testing data and feedback for this project. We thank Vincent Cheng for the Chinese translation. Furthermore, we thank the BERT team at Google Research and Allenai for making their pre-trained model weights available. Finally, we acknowledge the Huggingface team and thank them for implementing the SQuAD classes for Transformers."
],
[
"LS was funded by the National Institute for Health Research (NIHR Systematic Review Fellowship, RM-SR-2017-09-028). The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care."
],
[
"Scripts and supplementary material, as well as further illustrations are available from https://github.com/L-ENA/HealthINF2020. Training data for sentence classification and question answering are freely available from the cited sources.",
"Additionally, the Cochrane Schizophrenia Group extracted, annotated and made available data from studies included in over 200 systematic reviews. This aims at supporting the development of methods for reviewing tasks, and to increase the re-use of their data. These data include risk-of-bias assessment, results including all clean and published outcome data extracted by reviewers, data on PICOs, methods, and identifiers such as PubMed ID and a link to their study-based register. Additionally, a senior reviewer recently carried out a manual analysis of all 33,000 outcome names in these reviews, parsed and allocated to 15,000 unique outcomes in eight main categories BIBREF27."
]
]
}
|
{
"question": [
"What baselines did they consider?",
"What are the problems related to ambiguity in PICO sentence prediction tasks?"
],
"question_id": [
"4cbc56d0d53c4c03e459ac43e3c374b75fd48efe",
"e5a965e7a109ae17a42dd22eddbf167be47fca75"
],
"nlp_background": [
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar"
],
"paper_read": [
"no",
"no"
],
"search_query": [
"transformers",
"transformers"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"LSTM",
"SCIBERT"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. By predicting on multilingual and full text contexts we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data.",
"Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base."
],
"highlighted_evidence": [
"We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13.",
"SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. "
]
}
],
"annotation_id": [
"11ea0b3864122600cc8ab3c6e1d34caea0d87c8c"
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Some sentences are associated to ambiguous dimensions in the hidden state output",
"evidence": [
"Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. This supports a need for multiple tags per sentence, and the fine-tuning of weights within the network.",
"FLOAT SELECTED: Figure 2: Visualization of training sentences using BERTbase. The x and y-axis represent the two most dominant dimensions in the hidden state output, as selected by the t-SNE algorithm. This visualization uses the sixth layer from the top, and shows three examples of labelled P sentences and their embedded positions."
],
"highlighted_evidence": [
"Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. ",
"FLOAT SELECTED: Figure 2: Visualization of training sentences using BERTbase. The x and y-axis represent the two most dominant dimensions in the hidden state output, as selected by the t-SNE algorithm. This visualization uses the sixth layer from the top, and shows three examples of labelled P sentences and their embedded positions."
]
}
],
"annotation_id": [
"7c2e7cb2253cdf2c28dc3ebda63e2141052f4290"
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
]
}
|
{
"caption": [
"Table 1: Classes for the sentence classification task.",
"Figure 1: Colour coded example for a population entity annotation, converted to SQuAD v.2 format. Combined data are used to train and evaluate the system.",
"Figure 2: Visualization of training sentences using BERTbase. The x and y-axis represent the two most dominant dimensions in the hidden state output, as selected by the t-SNE algorithm. This visualization uses the sixth layer from the top, and shows three examples of labelled P sentences and their embedded positions.",
"Figure 3: Visualisation of training sentences using SCIBERT. The x and y-axes represent the two most dominant t-SNE reduced dimensions for each concatenation of layers",
"Table 2: Summary of results for the sentence classification. task",
"Table 3: Predicting PICOs in Chinese and German. Classes were assigned based on foreign language inputs only. For reference, translations were provided by native speakers.",
"Table 4: Question Answering versus entity recognition results.",
"Table 5: Subgroups of possible sentences versus impossible sentences.",
"Table 6: This table shows two examples for intervention span predictions in QA-BERT200. On the official SQuAD development set, the same model achieved a good score, an exemplary question and prediction for this is given in the bottom row."
],
"file": [
"2-Table1-1.png",
"5-Figure1-1.png",
"6-Figure2-1.png",
"6-Figure3-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"9-Table4-1.png",
"10-Table5-1.png",
"10-Table6-1.png"
]
}
|
1706.07179
|
RelNet: End-to-End Modeling of Entities & Relations
|
We introduce RelNet: a new model for relational reasoning. RelNet is a memory augmented neural network which models entities as abstract memory slots and is equipped with an additional relational memory which models relations between all memory pairs. The model thus builds an abstract knowledge graph on the entities and relations present in a document which can then be used to answer questions about the document. It is trained end-to-end: only supervision to the model is in the form of correct answers to the questions. We test the model on the 20 bAbI question-answering tasks with 10k examples per task and find that it solves all the tasks with a mean error of 0.3%, achieving 0% error on 11 of the 20 tasks.
|
{
"section_name": [
"Introduction",
"RelNet Model",
"Related Work",
"Experiments",
"Conclusion"
],
"paragraphs": [
[
"Reasoning about entities and their relations is an important problem for achieving general artificial intelligence. Often such problems are formulated as reasoning over graph-structured representation of knowledge. Knowledge graphs, for example, consist of entities and relations between them BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Representation learning BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 and reasoning BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 with such structured representations is an important and active area of research.",
"Most previous work on knowledge representation and reasoning relies on a pipeline of natural language processing systems, often consisting of named entity extraction BIBREF12 , entity resolution and coreference BIBREF13 , relationship extraction BIBREF4 , and knowledge graph inference BIBREF14 . While this cascaded approach of using NLP systems can be effective at reasoning with knowledge bases at scale, it also leads to a problem of compounding of the error from each component sub-system. The importance of each of these sub-component on a particular downstream application is also not clear.",
"For the task of question-answering, we instead make an attempt at an end-to-end approach which directly models the entities and relations in the text as memory slots. While incorporating existing knowledge (from curated knowledge bases) for the purpose of question-answering BIBREF11 , BIBREF8 , BIBREF15 is an important area of research, we consider the simpler setting where all the information is contained within the text itself – which is the approach taken by many recent memory based neural network models BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 .",
"Recently, BIBREF17 proposed a dynamic memory based neural network for implicitly modeling the state of entities present in the text for question answering. However, this model lacks any module for relational reasoning. In response, we propose RelNet, which extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. Our end-to-end method reads text, and writes to both memory slots and edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector. The only supervision signal for our method comes from answering questions on the text.",
"We demonstrate the utility of the model through experiments on the bAbI tasks BIBREF18 and find that the model achieves smaller mean error across the tasks than the best previously published result BIBREF17 in the 10k examples regime and achieves 0% error on 11 of the 20 tasks."
],
[
"We describe the RelNet model in this section. Figure 1 provides a high-level view of the model. The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory.",
"There are three main components to the model: 1) input encoder 2) dynamic memory, and 3) output module. We will describe these three modules in details. The input encoder and output module implementations are similar to the Entity Network BIBREF17 and main novelty lies in the dynamic memory. We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\\lbrace e_1, \\ldots , e_N\\rbrace $ , a question on the document represented as another sequence of words and an answer to the question."
],
[
"There is a long line of work in textual question-answering systems BIBREF21 , BIBREF22 . Recent successful approaches use memory based neural networks for question answering, for example BIBREF23 , BIBREF18 , BIBREF24 , BIBREF19 , BIBREF17 . Our model is also a memory network based model and is also related to the neural turing machine BIBREF25 . As described previously, the model is closely related to the Recurrent Entity Networks model BIBREF17 which describes an end-to-end approach to model entities in text but does not directly model relations. Other approaches to question answering use external knowledge, for instance external knowledge bases BIBREF26 , BIBREF11 , BIBREF27 , BIBREF28 , BIBREF9 or external text like Wikipedia BIBREF29 , BIBREF30 .",
"Very recently, and in parallel to this work, a method for relational reasoning called relation networks BIBREF31 was proposed. They demonstrated that simple neural network modules are not as effective at relational reasoning and their proposed module is similar to our model. However, relation network is not a memory-based model and there is no mechanism to read and write relevant information for each pair. Moreover, while their approach scales as the square of the number of sentences, our approach scales as the square of the number of memory slots used per QA pair. The output module in our model can be seen as a type of relation network.",
"Representation learning and reasoning over graph structured data is also relevant to this work. Graph based neural network models BIBREF32 , BIBREF33 , BIBREF34 have been proposed which take graph data as an input. The relational memory however does not rely on a specified graph structure and such models can potentially be used for multi-hop reasoning over the relational memory. BIBREF35 proposed a method for learning a graphical representation of the text data for question answering, however the model requires explicit supervision for the graph at every step whereas RelNet does not require explicit supervision for the graph."
],
[
"We evaluate the model's performance on the bAbI tasks BIBREF18 , a collection of 20 question answering tasks which have become a benchmark for evaluating memory-augmented neural networks. We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 . Performance is measured in terms of mean percentage error on the tasks.",
"Training Details: We used Adam and did a grid search for the learning rate in {0.01, 0.005, 0.001} and choose a fixed learning rate of 0.005 based on performance on the validation set, and clip the gradient norm at 2. We keep all other details similar to BIBREF17 for a fair comparison. embedding dimensions were fixed to be 100, models were trained for a maximum of 250 epochs with mini-batches size of 32 for all tasks except 3 for which the batch size was 16. The document sizes were limited to most recent 70 sentences for all tasks, except for task 3 for which it was limited to 130. The RelNet models were run for 5 times with random seed on each task and the model with best validation performance was chosen as the final model. The baseline EntNet model was run for 10 times for each task BIBREF17 .",
"The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
],
[
"We demonstrated an end-to-end trained neural network augmented with a structured memory representation which can reason about entities and relations for question answering. Future work will investigate the performance of these models on more real world datasets, interpreting what the models learn, and scaling these models to answer questions about entities and relations from reading massive text corpora."
]
]
}
|
{
"question": [
"How is knowledge retrieved in the memory?",
"How is knowledge stored in the memory?",
"What are the relative improvements observed over existing methods?",
"What is the architecture of the neural network?",
"What methods is RelNet compared to?"
],
"question_id": [
"082c88e132b4f1bf68abdc3a21ac4af180de1113",
"74091e10f596428135b0ab06008608e09c051565",
"43b4f7eade7a9bcfaf9cc0edba921a41d6036e9c",
"a75861e6dd72d69fdf77ebd81c78d26c6f7d0864",
"60fd7ef7986a5752b31d3bd12bbc7da6843547a4"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Recently, BIBREF17 proposed a dynamic memory based neural network for implicitly modeling the state of entities present in the text for question answering. However, this model lacks any module for relational reasoning. In response, we propose RelNet, which extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. Our end-to-end method reads text, and writes to both memory slots and edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector. The only supervision signal for our method comes from answering questions on the text."
],
"highlighted_evidence": [
"Our end-to-end method reads text, and writes to both memory slots and edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector."
]
}
],
"annotation_id": [
"a5d0953d56d8cd11ea834da09e2416aee83102ea"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "entity memory and relational memory.",
"evidence": [
"There are three main components to the model: 1) input encoder 2) dynamic memory, and 3) output module. We will describe these three modules in details. The input encoder and output module implementations are similar to the Entity Network BIBREF17 and main novelty lies in the dynamic memory. We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\\lbrace e_1, \\ldots , e_N\\rbrace $ , a question on the document represented as another sequence of words and an answer to the question."
],
"highlighted_evidence": [
"There are three main components to the model: 1) input encoder 2) dynamic memory, and 3) output module. We will describe these three modules in details. The input encoder and output module implementations are similar to the Entity Network BIBREF17 and main novelty lies in the dynamic memory. We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$ -dimensional word embeddings $\\lbrace e_1, \\ldots , e_N\\rbrace $ , a question on the document represented as another sequence of words and an answer to the question."
]
}
],
"annotation_id": [
"48d2fcec8e2a7967bf3f1ab2c12b0e95c778fd7e"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
],
"highlighted_evidence": [
" The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks.",
"The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
]
}
],
"annotation_id": [
"7090d01d80d3d73861302db34a0bea96bcc9af89"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. ",
"The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We describe the RelNet model in this section. Figure 1 provides a high-level view of the model. The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory."
],
"highlighted_evidence": [
"The model is sequential in nature, consisting of the following steps: read text, process it into a dynamic relational memory and then attention conditioned on the question generates the answer. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory."
]
}
],
"annotation_id": [
"121f0702a2eab76c1ad0119ac520adc61edd716c"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We evaluate the model's performance on the bAbI tasks BIBREF18 , a collection of 20 question answering tasks which have become a benchmark for evaluating memory-augmented neural networks. We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 . Performance is measured in terms of mean percentage error on the tasks.",
"The results are shown in Table 1 . The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
],
"highlighted_evidence": [
"We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17 .",
" The RelNet model achieves a mean error of 0.285% across tasks which is better than the results of the EntNet model BIBREF17 . The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks."
]
}
],
"annotation_id": [
"bd36e3e626f515050572af1723aa2049868fe1ec"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
]
}
|
{
"caption": [
"Figure 1: RelNet Model: The model represents the state of the world as a neural turing machine with relational memory. At each time step, the model reads the sentence into an encoding vector and updates both entity memories and all edges between them representing the relations.",
"Table 1: Mean % Error on the 20 Babi tasks."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png"
]
}
|
1909.08824
|
Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder
|
Understanding events and event-centered commonsense reasoning are crucial for natural language processing (NLP). Given an observed event, it is trivial for humans to infer its intents and effects, while this type of If-Then reasoning still remains challenging for NLP systems. To facilitate this, an If-Then commonsense reasoning dataset, Atomic, has been proposed, together with an RNN-based Seq2Seq model to conduct such reasoning. However, two fundamental problems still need to be addressed: first, an event may have multiple intents, while the generations of RNN-based Seq2Seq models are always semantically close; second, external knowledge of the event background may be necessary for understanding events and conducting If-Then reasoning. To address these issues, we propose a novel context-aware variational autoencoder that effectively learns event background information to guide If-Then reasoning. Experimental results show that our approach improves the accuracy and diversity of inferences compared with state-of-the-art baseline methods.
|
{
"section_name": [
"Introduction",
"Background",
"Context-aware Variational Autoencoder",
"Context-aware Variational Autoencoder ::: Architecture of CWVAE",
"Context-aware Variational Autoencoder ::: Optimizing",
"Context-aware Variational Autoencoder ::: Training Details",
"Experiments ::: Auxiliary Dataset",
"Experiments ::: Baselines",
"Experiments ::: Evaluation Metrics ::: Automatic Evaluation",
"Experiments ::: Evaluation Metrics ::: Human Evaluation",
"Experiments ::: Overall Results",
"Experiments ::: Case Study",
"Related Work ::: Event-Centered Commonsense Reasoning",
"Related Work ::: Variational AutoEncoder-Decoder Based Natural Language Generation",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because of understanding events is an important component of NLP. Given a daily-life event, human can easily understand it and reason about its causes, effects, and so on. However, it still remains a challenging task for NLP systems. This is partly due to most of them are trained for task-specific datasets or objectives, which results in models that are adapt at finding task-specific underlying correlation patterns but have limited capability in simple and explainable commonsense reasoning BIBREF4.",
"To facilitate this, BIBREF5 (BIBREF5) build the Event2Mind dataset and BIBREF4 (BIBREF4) present the Atomic dataset, mainly focus on nine If-Then reasoning types to describe causes, effects, intents and participant characteristic about events. Together with these datasets, a simple RNN-based encoder-decoder framework is proposed to conduct the If-Then reasoning.",
"However, there still remains two challenging problems. First, as illustrated in Figure FIGREF1, given an event “PersonX finds a new job”, the plausible feeling of PersonX about that event could be multiple (such as “needy/stressed out” and “relieved/joyful”). Previous work showed that for the one-to-many problem, conventional RNN-based encoder-decoder models tend to generate generic responses, rather than meaningful and specific answers BIBREF6, BIBREF7.",
"Second, as a commonsense reasoning problem, rich background knowledge is necessary for generating reasonable inferences. For example, as shown in Figure FIGREF1, the feeling of PersonX upon the event “PersonX finds a new job” could be multiple. However, after given a context “PersonX was fired”, the plausible inferences would be narrowed down to “needy” or “stressed out”.",
"To better solve these problems, we propose a context-aware variational autoencoder (CWVAE) together with a two-stage training procedure. Variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generate diversified inferences BIBREF8, BIBREF9.",
"In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.).",
"Experiments on the Event2Mind and Atomic dataset show that our proposed approach outperforms baseline methods in both the accuracy and diversity of inferences. The code is released at https://github.com/sjcfr/CWVAE."
],
[
"Before specifically describing two dataset —- Event2Mind and Atomic used in this paper as well as the If-Then reasoning task, for clarity, we define the following terminologies:",
"Base event: the prerequisite event in If-Then reasoning, organized as a verb phrase with a predicate and its arguments, such as the event “PersonX finds a new job” shown in Figure FIGREF1.",
"Inference dimension: a particular If-Then reasoning type, e.g., intents, effects of the base event. Details are shown in Table TABREF2 and Table TABREF3.",
"Target: the inferential results. For example, as shown in Figure FIGREF1, given a base event “PersonX finds a new job” and one inference dimension “xReact”, the targets could be “relieved” or “needy”. Notice that each inference dimension can have multiple targets.",
"Event2Mind Dataset contains 25K base events and 300K targets, annotated through crowdsourcing. Event2Mind is organized in a hierarchical form: each base event has three types of inference dimensions, and given a base event, under one of inference dimensions, several targets may simultaneously exist. Table TABREF2 shows the (base event-inference dimension-target) hierarchical structure through an example from Event2Mind.",
"Atomic Dataset Inspired by Event2Mind, the Atomic dataset shares the same hierarchical structure as Event2Mind, while scales up the size of dataset and expands the scope to nine types of inference dimensions. Table TABREF3 shows the (base event-inference dimension-target) hierarchical structure through an example from Atomic. Though Atomic covers the inference dimensions of Event2Mind, the base event collection of Event2Mind is nonidentical to that of Atomic.",
"Problem Definition The If-Then reasoning task could be formally defined as a conditional one-to-many generation problem: given a base event $x$ and one inference dimension $d$, the model is required to generate targets $y=f(x, d)$ as close to the ground truths as possible. Both $x$ and $y$ consist of sequence of words: $x=\\lbrace x_1,\\dots , x_{m}\\rbrace $, and $y=\\lbrace y_1,\\dots , y_{n}\\rbrace $, where $m$ and $n$ denotes the length of $x$ and $y$, respectively.",
"Conditional Variational Autoencoder The variational autoencoder (VAE) defines a generative framework suited for one-to-many generation problem BIBREF10. While conditional variational autoencoder (CVAE) BIBREF11 is an extension of VAE on the conditional generation problem. As shown in Figure FIGREF5 (a), CVAE characterizes the conditional one-to-many generation problem using three random variables: event $x$, target $y$ and a latent variable $z$, which is used for modeling the latent distribution of semantic over targets given an event. Hence, under a certain inference dimension, with regard to the latent semantic variable $z$, the conditional generation problem could be expressed as $p(y|x)=\\int p(y|x,z)p(z|x)dz$. CVAE models $p(y|x,z)$ and $p(z|x)$ using deep neural networks (parameterized by $\\theta $) $p_{\\theta }(y|x,z)$ and $p_{\\theta }(z|x)$. Then as illustrated in Figure FIGREF5 (b), $y$ could be generated from $x$ and $z$.",
"CVAE is trained to maximize the conditional likelihood $p(y|x)$, which involves an intractable marginalization over the latent variable $z$. Instead, following BIBREF10 (BIBREF10), a practical way is to introduce another deep network (parameterized by $\\phi $) $q_{\\phi }(z|x,y)$ to approximate the true posterior distribution $p(z|x,y)$ and maximize the evidence lower bound (ELBO) of the log-likelihood function:",
"Therefore, CVAE is composed of three neural networks in general. We refer to $p_{\\theta }(z|x)$ as a prior network, $q_{\\phi }(z|x,y)$ as a recognition network, and $p_{\\theta }(y|x,z)$ as a neural decoder."
],
[
"Traditional CVAE can model the event-target relation. In other words, given an observed event, CVAE can generate its corresponding targets. While in this paper we model the If-Then reasoning as a [(background), event]-target process. It means that in addition to the observed event, we also want to involve the event background knowledge (which can be learned from event contexts) to generate the reasonable targets.",
"To this end, we propose a context-aware variational autoencoder (CWVAE), with two additional latent variables: a context-acquiring latent variable $z_c$ to directly acquire context information, and a context-aware latent variable $z_{c^{\\prime }}$ to learn background knowledge from $z_c$, as shown in Figure FIGREF6 (a). However, the event context information is absent in the Event2Mind and Atomic dataset. To learn from the external event context information, we design the following two-stage training procedure for CWVAE.",
"Pretrain: Learning Event Background Knowledge from Auxiliary Dataset In the pretrain stage, CWVAE is trained on three narrative story corpora with rich event context information. As shown in Figure FIGREF6 (a), context-acquiring latent variable $z_c$ is directly conditioned on the context $c$. Hence, $z_c$ could be employed for acquiring background knowledge from event contexts. Then, we minimize the distance between $z_c$ and the context-aware latent variable $z_{c^{\\prime }}$, by which the event background knowledge is transferred from $z_c$ to $z_{c^{\\prime }}$.",
"Finetune: Adapt Event Background Knowledge to Each Inference Dimension In the finetune stage, as shown in Figure FIGREF6 (b), CWVAE is trained on the Event2Mind and Atomic dataset without the event context information. Pretrained CWVAE is finetuned to learn the specific inferential knowledge of each inference dimension. After the training procedure, as shown in Figure FIGREF6 (c), samples of $z$ is generated based on $x$ and samples of $z_{c^{\\prime }}$, where $z_{c^{\\prime }}$ contains rich event background knowledge helpful for If-Then reasoning."
],
[
"As shown in Figure FIGREF8, CWVAE is mainly composed of four parts: a neural encoder that provides distributed representations of base events/targets, a recognition network for inferring $q_{\\phi }(z|x,y)$, $q_{\\phi }(z_c|x,c)$ and $q_{\\phi }(z|z_{c^{\\prime }}, x)$, a prior network for modeling $p_{\\theta }(z_{c^{\\prime }}|x)$ and $p_{\\theta }(z|x, z_{c^{\\prime }})$, and a neural decoder that integrates the information from $z$ and $z_{c^{\\prime }}$ to generate targets.",
"Neural Encoder We employ a bidirectional GRU as neural encoder, which encodes context $c$, event $x$ and target $y$ into distributed representations $h^c=\\lbrace h_1^c,\\dots ,h_{l_c}^c\\rbrace $, $h^x=\\lbrace h_1^x,\\dots ,h_{l_x}^x\\rbrace $ and $h^y=\\lbrace h_1^y,\\dots ,h_{l_y}^y\\rbrace $, where $l_c$, $l_x$ and $l_y$ is the length of $c$, $x$ and $y$, respectively.",
"Recognition Network The recognition network models $q_{\\phi }(z|x,y)$, $q_{\\phi }(z_c|x,c)$, $q_{\\phi }(z|z_{c^{\\prime }}, x)$ based on $h^x$, $h^y$ and $h^c$.",
"Following traditional VAE, the above-mentioned three distributions are assumed to be multivariate Gaussian distribution with a diagonal covariance structure:",
"where $\\mu $ denotes the mean of the distribution, $\\sigma $ denotes the standard deviation of the distribution, and $I$ denotes the identity matrix.",
"Given $h^x$, $h^y$ and $h^c$, we propose a novel attention-based inferer (ABI) module to estimate the mean and standard deviation of $q_{\\phi }(z_{c}|x,c)$, $q_{\\phi }(z_{c^{\\prime }}|x,y)$ and $q_{\\phi }(z|x,y)$:",
"Briefly, through the attention mechanism, ABI can capture the semantic interaction between input sequences, and estimate the parameters of distributions based on it. We will introduce the specific structure of ABI in below.",
"Prior Network Prior Network models $p_{\\theta }(z_{c^{\\prime }}|x)$ and $p_{\\theta }(z|x, z_{c^{\\prime }})$ based on $h^x$. The distribution of $p_{\\theta }(z_{c^{\\prime }}|x)$ and $p_{\\theta }(z|x, z_{c^{\\prime }})$ are still assumed to be multivariate Gaussian, whereas the parameters are different:",
"where $\\mu ^{^{\\prime }}$ denotes the mean of the distribution, $\\sigma ^{^{\\prime }}$ denotes the standard deviation of the distribution and $I$ denotes the identity matrix.",
"Then the attention-based inferer module is still employed to estimate parameters of distributions:",
"Neural Decoder Given the base event $x$, the semantic latent variable $z$, and the context-aware latent variable $z_{c^{\\prime }}$, the neural decoder defines the generation probability of $y$ as following:",
"where $p(y_j|y<j, z, z_{c^{\\prime }}, x)=g(y_{j-1}, s_{j-1}, e_j)$, $g(\\cdot )$ is an attention-based feed forward model, $e_j=\\sum _i \\alpha _{ji}h_i^{x}$ is the context vector and $s_{j-1}$ is the hidden state of the decoder. We obtain $g(\\cdot )$ and $e_j$ the same way as BIBREF12 (BIBREF12). Whereas our decoder differs from BIBREF12 (BIBREF12) in that our model integrates the context-aware latent variable $z_{c^{\\prime }}$ and semantic latent variable $z$ in the computation of $s_j=\\mathrm {GRU}([E_{yj};s_{j-1},z,z_{j-1}])$, where $E_{yj}$ is the word embeddings of target words.",
"Note that through concatenating $z$ and $z_{c^{\\prime }}$ with $E_{yj}$ and $s_{j-1}$, $s_j$ could be affected by context-aware latent variable $z_{c^{\\prime }}$ and semantic latent variable $z$. This allows model to directly access to the event background knowledge from $z_{c^{\\prime }}$. In addition, the randomness of $z$ and $z_{c^{\\prime }}$ would increase the diversity of model generation.",
"Attention-based Inferer Attention mechanism has shown strong ability in capturing semantic interactions BIBREF13. Inspired by the co-attention mechanism BIBREF14, we propose an attention-based inferer (ABI) to estimate the mean and standard deviation of a distribution belongs to $p_{\\theta }(\\cdot )$ or $q_{\\phi }(\\cdot )$ by capturing semantic interactions of input sequences.",
"Specifically, given two input sequences (e.g., representations of contexts and events) $a=\\lbrace a_1,\\dots ,a_{l_a}\\rbrace $ and $b=\\lbrace b_1,\\dots ,b_{l_b}\\rbrace $ with length $l_a$ and $l_b$, we first obtain the attention scores from each side through:",
"where $W_a \\in \\mathbb {R}^{d\\times d_a}$ and $W_b \\in \\mathbb {R}^{d\\times d_b}$ are parameter weights.",
"With these attention scores, the context vectors of both sequences are given by:",
"Then we perform a mean pooling operation on context vectors of both sequences:",
"To obtain the mean and standard deviation, the pooled context vectors $\\bar{c^a}$ and $\\bar{c^b}$ which carry semantic interaction between two sequences, are concatenated and projected into a latent semantic space through a nonlinear transformation:",
"Finally the mean and standard deviation are generated through a nonlinear transformation over $h_z$:"
],
[
"With the incorporation of $z_{c^{\\prime }}$, the original loglikelihood could be decomposed as:",
"Then following traditional CVAE, the ELBO of CWVAE is defined as follows:",
"which is the objective function at the finetune stage.",
"While in the pretrain stage, as we aim to learn background knowledge through minimizing the distance between $z_c$ and $z_{c^{\\prime }}$, in addition to $L^{ELBO}$, a context-aware regulation term is introduced:",
"where the context aware regularization term is the KL distance between $z$ and $z_{c^{\\prime }}$. Through minimizing the context aware regularization term, we aim to pass event context knowledge from $z_c$ to the context aware latent variable $z_{c^{\\prime }}$."
],
[
"To test the performance of CWVAE, we split the Event2Mind and Atomic dataset into training, development and test sets (80%, 10%, 10%) in the same way as BIBREF5 (BIBREF5) and BIBREF4 (BIBREF4), respectively. We initialize the embedding layer from 300d GloVe word embeddings. The neural encoder is chosen to be biGRU with 300 hidden units. For the ABI module, size of $W_a$ and $W_b$ is set to be $100 \\times d_a$ and $100 \\times d_b$ respectively. The dimension of $z_c$, $z_{c^{\\prime }}$ and $z$ is all set as 40. The neural decoder is set to be GRU with 300d hidden state. Regulation coefficient $\\lambda $ of context-aware regulation term is set to be 0.1. Models are trained using an Adam optimizer BIBREF15 with a learning rate of 0.001."
],
[
"The auxiliary dataset is built upon three human-written story corpora: ROCStories BIBREF16, VIST BIBREF17 and WritingPrompts BIBREF18. ROCStories and VIST are composed of short stories with five sentences. We filter out stories of more than 1,000 words in WritingPrompts, and cut the remaining stories into five-sentence-paragraphs.",
"For each five-sentence-paragraph, we define the first three sentences as contexts of the base event, the fourth sentence as the base event, and the fifth sentence as the inference target. For example, as shown in Table TABREF25, the first three sentences describe a context that Jason was unsatisfied about his job and applied for a new job. Hence, after happening the event “he got the job”, a plausible react about the event could be “jason was much happier at his new job”. In total, the auxiliary dataset contains 192,316 $(context, event, target)$ triples."
],
[
"We compared our proposed model with the following four baseline methods:",
"RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.",
"Variational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.",
"VRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.",
"CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.",
"Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively."
],
[
"We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens."
],
[
"Since automatic evaluation of generations is still a challenging task BIBREF22, we also conduct human evaluations on the model performance. Five human experts are employed to evaluate the coherence, diversity and fluency of generated targets. Experts are asked to vote for if a generation is fluent or coherent for each generated target, and give a 1-5 score for the diversity of generations. For both Event2Mind and Atomic datasets, 100 events are randomly selected from the test set. For each method, top 10 generated targets of each base event are used for evaluation. Finally we report three overall averaged scores of coherence, diversity and fluency on both datasets, respectively."
],
[
"We list the perplexity and BLEU score of CWVAE and baseline methods on Event2Mind and Atomic in Table TABREF31 and Table TABREF33, respectively, and show the distinct-1 and distinct-2 score on Event2Mind and Atomic in Table TABREF32 and Table TABREF34, respectively. We find that:",
"(1) As shown in Table TABREF32 and Table TABREF34, comparison between RNN-based Seq2Seq and variational-based methods, including Variational Seq2Seq, VRNMT, CWVAE-unpretrained and CWVAE shows that, variational-based methods could increase the diversity of generations. This confirms one of our motivations that variational-based methods could capture the latent semantic distribution within targets and increase the diversity of If-Then reasoning.",
"(2) Comparing CWVAE-unpretrained with other baseline methods shows that, in general CWVAE improves the accuracy and diversity on both dataset. These results indicate the efficiency of CWVAE in capturing the latent semantic distribution of targets, and generate more reasonable inferential results.",
"(3) Comparison between CWVAE and CWVAE-unpretrained shows that the pretrain stage could enhance the performance of CWVAE in both the accuracy and diversity. This is mainly because event knowledge could offer the guidance for If-Then reasoning. In the pretrain stage, CWVAE could capture the event background knowledge through context-aware latent variable, and such knowledge could be be adapted to our task through the fintune stage.",
"To further evaluate the effectiveness of our proposed approach, we also conduct human evaluations, the results of which are shown in Table TABREF39 and Table TABREF40. On both datasets, CWVAE-based methods achieve consistent better coherence, diversity and fluency performances. While comparing with CWVAE-Unpretrained, the pretrain procedure could improves the performance on coherence and fluency. The main reasons are twofold: first, the CWVAE has advantage in capturing the semantic distribution of targets; second, event background learned from the pretrain stage is helpful for the If-Then reasoning."
],
[
"Table TABREF41 provides an example of model generations given the base event “PersonX works tirelessly” and the inference dimension “xIntent”. The generations under CWVAE mainly contain four kinds of semantics: (1) be productive, (2) finish his work soon, (3) accomplish goal, (4) earn more money. While the semantics of generations using baseline RNN-based Seq2Seq model is relatively limited. Furthermore, the first three kinds of semantic overlap the three ground truth targets, and the fourth kind of semantic is in accordance with daily-life commonsense. Compared to RNN-based Seq2Seq model, our approach can increase the diversity and rationality of generations, meanwhile keep the accuracy."
],
[
"Understanding events and constructing event-centered commonsense knowledge are crucial to many NLP applications, such as intention recognition BIBREF23 and dialog generation BIBREF24. Recently a growing number of studies focus on event-centered commonsense reasoning, which mainly concentrates on two areas, script event prediction and story ending generation/choosing.",
"Script event prediction concerns with the temporal relationships between script events BIBREF25, which requires models to choose a correct subsequent triple-organized event among the candidates BIBREF2. Prior work mainly focused on modeling event pairs BIBREF25, event chains BIBREF2 and event graph BIBREF3 to predict the subsequent event. Story ending generation focuses on generating plausible story endings BIBREF16, which requires models to understand the story context, and keep generated endings logically consistent with it BIBREF26, BIBREF27. The above tasks mainly investigate the logical orders of events, whereas the If-Then reasoning task focuses on inferring the mental state of event participants."
],
[
"VAE BIBREF10 has been widely applied in various of text generation tasks, such as dialogue and machine translation. In dialogue generation, BIBREF9 (BIBREF9) adapts VAE with encoder-decoder framework to model the latent semantic distribution of answers, which can increase the diversity of generations. For the task of machine translation, BIBREF19 (BIBREF19) and BIBREF28 (BIBREF28) employ a latent variable to capture the semantic interaction between the source and target sentence, and regard the latent variable as a supplementation of attention mechanism. While BIBREF29 (BIBREF29) use the latent variable to model topic distributions in text generation. In this paper, we introduce an additional context-aware latent variable to effectively learn background knowledge and conduct If-Then reasoning on the guidance of it."
],
[
"In this paper, we propose a novel context-aware VAE (CWVAE) framework with two training stages for If-Then commonsense reasoning. By introducing an additional context-aware latent variable, CWVAE is able to learn external background knowledge, and conduct If-Then reasoning under its guidance. In the pretrain stage, CWVAE learns event background knowledge, then in the finetune stage CWVAE adapts such knowledge to each inference dimension. Experimental results demonstrate that CWVAE outperforms baseline methods in both the accuracy and diversity of generations."
],
[
"We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (SQ2018AAA010010), the National Key Research and Development Program of China (2018YFB1005103), the National Natural Science Foundation of China (NSFC) via Grant 61702137."
]
]
}
|
{
"question": [
"How do they measure the diversity of inferences?",
"By how much do they improve the accuracy of inferences over state-of-the-art methods?",
"Which models do they use as baselines on the Atomic dataset?",
"How does the context-aware variational autoencoder learn event background information?",
"What is the size of the Atomic dataset?"
],
"question_id": [
"7d59374d9301a0c09ea5d023a22ceb6ce07fb490",
"8e2b125426d1220691cceaeaf1875f76a6049cbd",
"42bc4e0cd0f3e238a4891142f1b84ebcd6594bf1",
"fb76e994e2e3fa129f1e94f1b043b274af8fb84c",
"99ef97336c0112d9f60df108f58c8b04b519a854"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
" ",
" ",
" ",
" ",
" "
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "by number of distinct n-grams",
"evidence": [
"We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens."
],
"highlighted_evidence": [
"Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. "
]
}
],
"annotation_id": [
"7f7d9a78c51f1de52959ee1634d8d01fc56c9efd"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "ON Event2Mind, the accuracy of proposed method is improved by absolute BLUE 2.9, 10.87, 1.79 for xIntent, xReact and oReact respectively.\nOn Atomic dataset, the accuracy of proposed method is improved by absolute BLUE 3.95. 4.11, 4.49 for xIntent, xReact and oReact.respectively.",
"evidence": [
"We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which is particular suitable for evaluating the model performance on one-to-many problem BIBREF20. Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. The distinct is normalized to $[0, 1]$ by dividing the total number of generated tokens.",
"FLOAT SELECTED: Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.",
"FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened."
],
"highlighted_evidence": [
"Further, we employ BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-gram to evaluate the diversity of generations BIBREF6. ",
"FLOAT SELECTED: Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.",
"FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened."
]
}
],
"annotation_id": [
"5f5d24e05be705e9487a2032e7c9a8e3c69d41d7"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"RNN-based Seq2Seq",
"Variational Seq2Seq",
"VRNMT ",
"CWVAE-Unpretrained"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We compared our proposed model with the following four baseline methods:",
"RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.",
"Variational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.",
"VRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.",
"CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.",
"Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.",
"FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened."
],
"highlighted_evidence": [
"We compared our proposed model with the following four baseline methods:\n\nRNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.\n\nVariational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.\n\nVRNMT Propose by BIBREF19 (BIBREF19), VRNMT combines CVAE with attention-based encoder-decoder framework through introduces a latent variable to model the semantic distribution of targets.\n\nCWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.\n\nNote that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.",
"FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened."
]
}
],
"annotation_id": [
"667d47b73133321cfe695db94c2418e8b8c4d9bb"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": " CWVAE is trained on an auxiliary dataset to learn the event background information by using the context-aware latent variable. Then, in finetute stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target.",
"evidence": [
"In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.)."
],
"highlighted_evidence": [
"In addition to the traditional VAE structure, we introduces an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consists of three narrative story corpora and contains rich event background knowledge), to learn the event background information by using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of If-Then inferential target (e.g., intents, reactions, etc.)."
]
}
],
"annotation_id": [
"d01baf34ae2b5ff6b706bad6ad645c4da7d42d1b"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"122017054a7e7b46d0ad276b7a3e5abd76b463ba"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
}
|
{
"caption": [
"Figure 1: A illustration of two challenging problems in IfThen reasoning. (a) Given an observed event, the feelings about this event could be multiple. (b) Background knowledge is need for generating reasonable inferences, which is absent in the dataset (marked by dashed lines).",
"Table 1: Hierarchical structure of Event2Mind dataset. For specific inference dimensions, “x” and “o” refers to PersonX and others respectively.",
"Table 2: Hierarchical structure of Atomic dataset. For specific inference dimensions, “x” and “o” refers to PersonX and others respectively.",
"Figure 2: Illustration of inference and generation process of CVAE in a directed graph. Dashed lines represent the inference of z. Solid lines represent the generation process.",
"Figure 3: Illustration of pretrain, finetune and generation process of CWVAE in a directed graph. Dashed lines represent the inference of z, zc and zc′ . Solid lines represent the generation process. Red circle denotes the context-aware latent variable.",
"Figure 4: Architecture of CWVAE. We mark Neural encoder in green, prior network in blue, recognition network in brown and neural decoder in orange, respectively.",
"Table 3: An example for the construction of auxiliary dataset. For a five-sentence-paragraph, the first three sentences are taken as event context, while the fourth and fifth sentence is taken as base event and target respectively.",
"Table 4: Average perplexity and BLEU score (reported in percentages) for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.",
"Table 5: Distinct-1 and distinct-2 scores for the top 10 generations under each inference dimension of Event2Mind. The the best result for each dimension is emboldened.",
"Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened.",
"Table 7: Distinct-1 and distinct-2 scores for the top 10 generations under each inference dimension of Atomic. The the best result for each dimension is emboldened.",
"Table 9: Human evaluation results on Atomic.",
"Table 8: Human evaluation results on Event2Mind.",
"Table 10: An example of inferences made by CWVAE and RNN-based Seq2Seq model under inference dimension “xIntent”."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"2-Table2-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png",
"7-Table9-1.png",
"7-Table8-1.png",
"8-Table10-1.png"
]
}
|
1708.08615
|
Comparing Human and Machine Errors in Conversational Speech Transcription
|
Recent work in automatic recognition of conversational telephone speech (CTS) has achieved accuracy levels comparable to human transcribers, although there is some debate how to precisely quantify human performance on this task, using the NIST 2000 CTS evaluation set. This raises the question what systematic differences, if any, may be found differentiating human from machine transcription errors. In this paper we approach this question by comparing the output of our most accurate CTS recognition system to that of a standard speech transcription vendor pipeline. We find that the most frequent substitution, deletion and insertion error types of both outputs show a high degree of overlap. The only notable exception is that the automatic recognizer tends to confuse filled pauses ("uh") and backchannel acknowledgments ("uhhuh"). Humans tend not to make this error, presumably due to the distinctive and opposing pragmatic functions attached to these words. Furthermore, we quantify the correlation between human and machine errors at the speaker level, and investigate the effect of speaker overlap between training and test data. Finally, we report on an informal "Turing test" asking humans to discriminate between automatic and human transcription error cases.
|
{
"section_name": [
"Introduction",
"Measuring Human Error",
"Machine Transcription System",
"Error Distribution and Correlation",
"Error types",
"A Turing-like Experiment",
"Conclusions",
"Acknowledgments"
],
"paragraphs": [
[
"Automatic speech recognition (ASR) systems have seen remarkable advances over the last half-decade from the use of deep, convolutional and recurrent neural network architectures, enabled by a combination of modeling advances, available training data, and increased computational resources. Given these advances, our research group recently embarked on an effort to reach human-level transcription accuracy using state-of-the-art ASR techniques on one of the genres of speech that has historically served as a difficult benchmark task: conversational telephone speech (CTS). About a decade ago, CTS recognition had served as an evaluation task for government-sponsored work in speech recognition, predating the take-over of deep learning approaches and still largely in the GMM-HMM modeling framework BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . It had proven to be a hard problem, due to the variable nature of conversational pronunciations, speaking styles, and regional accents. Seide at al. BIBREF6 demonstrated that deep networks as acoustic models could achieve significant improvements over GMM-HMM models on CTS data, and more recently researchers at IBM had achieved results on this task that represented a further significant advance BIBREF7 , BIBREF8 over those from a decade ago.",
"The goal of reaching “human parity” in automatic CTS transcription raises the question of what should be considered human accuracy on this task. We operationalized the question by submitting the chosen test data to the same vendor-based transcription pipeline that is used at Microsoft for production data (for model training and internal evaluation purposes), and then comparing the results to ASR system output under the NIST scoring protocol. Using this methodology, and incorporating state-of-the-art convolutional and recurrent network architectures for both acoustic modeling BIBREF9 , BIBREF10 , BIBREF7 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 and language modeling BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 with extensive use of model combination, we obtained a machine error rate that was very slightly below that of the human transcription process (5.8% versus 5.9% on Switchboard data, and 11.0% versus 11.3% on CallHome English data) BIBREF19 . Since then, Saon et al. have reported even better results, along with a separate transcription experiment that puts the human error rate, on the same test data, at a lower point than measured by us (5.1% for Switchboard, 6.8% for CallHome) BIBREF20 .",
"In this paper, we address the question whether there are major qualitative differences between the results of human transcriptions of conversational speech and those obtained by ASR systems, based on a detailed analysis of the data and system output from our human parity experiment BIBREF19 . The question becomes important if ASR is to replace humans as the first step in fully automatic speech understanding systems—if machine transcription errors are qualitatively different from humans then we would have to worry about the possible effects on downstream processing, and mitigation techniques so as to still achieve an overall “natural” user experience (e.g., in real-time conversational speech translation, such as in the Skype application).",
"We start by discussing why human error rate on this task must themselves be considered a moving target. Next we ask whether speech that is difficult for ASR also tends to be hard for humans to transcribe (and vice-versa), and whether the speaker overlap with the training data that is found in a portion of the test data has a noticeable effect on the result, as was suggested in BIBREF20 . We then look at the most frequent word error types exhibited by the two transcription systems (human and machine), and finally report on a very preliminary but still informative experiment to see if humans could tell apart the transcription source (again, human versus machine), based on the errors they make."
],
[
"The assessment of human transcription error on conversational speech has been somewhat murky. A widely cited figure is 4% word error rate (WER), based on BIBREF21 . However, the reference therein is only a “personal communication” without further data. The Linguistics Data Consortium quantified inter-transcriber disagreement for the NIST 2003 CTS evaluation data at between 4.1% and 4.5% with very careful multiple transcriptions BIBREF22 . For “quick transcription”, the disagreement increased to 9.6%. The CTS data in the NIST study is from the Switchboard (SWB) and Fisher corpora, and is therefore comparable to the SWB portion of our data, i.e., coming from telephone conversations between strangers discussing a general-interest topic. Still, the exact dataset is different, which may account for some of the discrepancy with error rates measured on the NIST 2000 set used by us (5.9%) and IBM (5.1%), although the numbers are remarkably close.",
"As briefly described in the introduction, we measured human performance by leveraging an existing pipeline in which Microsoft data is transcribed on a weekly basis. This pipeline uses a large commercial vendor to perform two-pass transcription. In the first pass, a transcriber works from scratch to transcribe the data. In the second pass, a second listener monitors the data to do error correction. Dozens of hours of test data are processed in each batch, with no special instructions to the transcribers. The waveform segments, roughly corresponding to utterances, making up the test set are processed separately. This makes the task easier since the speakers are more clearly separated, but also more difficult since the two sides of the conversation are not interleaved and context may be missing. We performed that text normalization on the human transcripts to remove systematic discrepancies with the NIST scoring references. (Since this was done with some amount of trial and error it effectively was “cheating” for the benefit of the human transcribers.) We then applied the NIST scoring tools to obtain word error rates of 5.9% on the SWB portion, and 11.3% on the CallHome (CH) portion of the NIST 2000 test set. The latter corpus, unlike Switchboard, consists of conversations between friends and family, without seed topic, which would account for the much higher overall error rate. Clearly our method was not designed to achieve the highest possible human transcription accuracy; instead, as pointed out in BIBREF19 , our goal was to establish a benchmark corresponding to industry-standard (i.e. high-volume) professional transcript production.",
"The authors in BIBREF20 undertook to measure human error on the same dataset, but using a more involved process. The major differences were: (1) The transcription vendor was cognizant of the experiment and actively involved. (2) Transcribers were chosen based on past performance and familiarized with the conventions used by LDC in generating the reference transcripts. (3) Three independent, parallel transcribers were used, plus a fourth one for 2nd-pass quality control (QC) of the 1st-pass output. All in all, the transcribers performed roughly 12 to 18 listening passes. (4) The final output was obtained by choosing the transcriber (with QC) who had obtained the lowest WER on the test data. As noted earlier, the resulting WERs were 5.1% and 6.8%, respectively. The considerably lower estimate for CH could be a result of the transcribers having access to the entire conversation (as per personal communication with the authors). This would be especially helpful in transcribing unfamiliar vocabulary and speaking styles (allowing the transcriber to “adapt” to the data more effectively).",
"Clearly the IBM experiment made a much more thorough effort to probe the boundaries of human accuracy, and may in fact have come close to the inter-transcriber agreement previously measured by LDC on a different data set. However, it is important to realize that further improvements on the human side are no doubt achievable. For example, the number of transcribers could be scaled up further, or they could be allowed to confer with each other, to resolve disagreements. This raises the question of where to draw the line on human effort.",
"Finally, it is important to realize that conversational speech has a high degree of inherent ambiguity. For example, conversational pronunciations are highly variable and often reduced BIBREF23 . Another source of ambiguity is the lack of context and knowledge shared by the speakers (especially in the case of CH). In the presence of inherent ambiguity, inter-transcriber agreement can be improved by agreed-upon disambiguation rules, although this would not necessarily reflect true agreement based on speech understanding."
],
[
"The details of our conversational speech recognition system are described elsewhere BIBREF19 , so we only give a brief summary here. The system employs independent decodings by diverse acoustic models, including convolutional neural net (CNN) and bidirectional long short-term memory (BLSTM) models that differ by model architecture, number of senones, amount of training data, and other metaparameters. Decoding uses a pruned 4-gram N-gram language model (LM) to generate lattices, which are then expanded into 500-best lists using a larger N-gram LM. The N-best lists are rescored with multiple LSTM-LMs operating in forward and backward directions. Model scores are combined log-linearly at the utterance level and converted to posterior probabilities represented as word confusion networks. The various subsystems making up the final system are selected in a greedy search, and their weights are optimized via an expectation-maximization algorithm, on development data. The acoustic training data comprises all the publicly available CTS data (about 2000 hours), while the LMs are additionally trained on Broadcast News and Web data from U. Washington. The individual subsystems (based on different acoustic models) achieve word error rates between 6.4% and 7.7% on the Switchboard evaluation set, and between 12.2% and 17.0% on the CallHome portion. Combined, the system achieves 5.8% and 11.0% WER, respectively."
],
[
"We note in passing that machine and human transcription WERs do not differ significantly according the Wilcoxon and Matched Pairs Sentence Segment Word Error tests as applied by NIST, nor do they differ according to a Sign test comparing error counts at the utterance level.",
"A first high-level question regarding the relation between word errors by machine and human transcribers is whether difficulty in one predicts difficulty in the other. Figure FIGREF1 shows scatter plots of speaker-level error rates (machine vs. human), separated by corpus. Each corpus subset has 40 conversation sides.",
"Clearly the errors at that level are correlated, with INLINEFORM0 for SWB and INLINEFORM1 for CH. This suggests that properties of the speech, either as a function of the content, the speaker, or the channel (each speaker occurs in exactly one test conversation), cause errors for both machine and human transcription.",
"We observe that the CH data has two speakers with outlier machine error rates (37.5% and 64.7% WER, solid red dots in Figure FIGREF1 ). These correspond to secondary speakers in their respective conversation sides, each with only a fraction of the speech of the dominant speaker. Note that the ASR system processes each conversation assuming only a single speaker per side. If we remove these outliers, the machine-human error correlation on CH increases to INLINEFORM0 . With secondary speakers excluded, we can also observe that the machine error rates cluster tighter than the human ones in both corpora (SWB: machine INLINEFORM1 vs. human INLINEFORM2 ; CH: machine INLINEFORM3 vs. human INLINEFORM4 ).",
"In BIBREF20 it was sugggested that one of the reasons for the much higher error rate on CH compared to SWB was that 36 of the 40 SWB test speakers occur in the portion of the SWB corpus that is used in training (due to what we surmise to be an oversight in the selection of the NIST 2000 test set). To assess this hypothesis we singled out the four speakers in the SWB portion that are not found in the training set; these are shown as solid black circles in Figure FIGREF1 . At first, it seems that the speaker-averaged WER for the “seen” speakers (machine WER 5.9%) is indeed much lower than for the speakers not found in training (7.5%). However, we can safely attribute this to bad luck and small sample size. The average machine WER of 7.5% for “unseen” speakers is well within one standard deviation of the “seen” speakers' WER distribution ( INLINEFORM0 ), and more tellingly, almost exactly the same relative difference in WERs between “seen” and “unseen” speakers is observed for human transcriptions (6.0% versus 7.7%). Clearly the human transcribers did not have the benefit of training on the “seen” speakers, so the difference must be due to the intrinsic difficulty of the speakers, which affects both transcription systems."
],
[
"Tables TABREF3 – TABREF5 show the top ten types of substitutions, deletions and insertions for both ASR and human transcripts. Inspections reveals that the same short function words, discourse markers and filled pauses appear in the top ten errors for both systems. There is one notable exception, however. The top substitution error for the ASR system involves misrecognition of filled pauses (“%hesitation”, a word class label covering “uh” and “um” in various spellings) as backchannel acknowledgments (“%bcack”, standing for ”uhhuh”, “mhm”, etc.). The same substitution error is much less frequent in human transcripts.",
"A possible explanation for this asymmetry lies in the discourse functions of filled pauses and backchannels. Filled pauses serve to either claim or retain the floor, signaling that the speaker wants to either start or continue speaking. Backchannels, on the other hand, acknowledge that the speaker is listening, and that the other speaker should carry on. Since the two classes of words thus have exactly opposite functions in turn management, it stands to reason that humans are keenly aware of their differences and use all available phonetic, prosodic, and contextual cues to distinguish then. Our ASR system, by contrast, uses only its standard acoustic-phonetic and language models. Modeling dialog context in particular would be expected to improve this shortcoming."
],
[
"Having established that human and machine transcriptions are quite similar in several aspects, including the word token types involved, we were wondering if higher-level error patterns could distinguish the two systems. For example, one might expect that human misrecognitions are guided by a strong “human” language and understanding model, whereas machine errors might be more likely to generate syntactic and semantic nonsense. To get at this question we designed a specialized version of the classic Turing test, in the sense that a human judge is asked to interact with a system with the goal of estimating whether it is underpinned by human or artificial “intelligence.” In our case, the task involved inspecting one randomly chosen utterance from the test set at a time, with a side-by-side display of the reference transcript, the human transcript, and the ASR output (after the text normalizations that are part of the scoring protocol). Only utterances having at least one transcription error and a discrepancy between the two versions are presented. Discrepancies between the transcript versions are highlighted, and the error type (substitution, insertion, deletion) is visually coded as well, as shown in Figure FIGREF7 .",
"We ran this informal experiment during four days on the exhibitor floor of the 2017 IEEE ICASSP conference in New Orleans. The players were not formally recruited or characterized, but consisted of conference attendees who for the most part had some background or experience in speech processing. Subjects were introduced to the test by explaining the research background, and were allowed to play as many trials as they wanted. Out of a total of 353 trials, subjects identified the human transcript correctly 188 times, for an overall success rate of 53%. The successes included occasional gimmes like human misspellings or the asymmetry in the filled pause/backchannel substitution (which we pointed out to the subjects). According to a binomial test, this success rate does not differ signficantly from the 50% chance rate ( INLINEFORM0 , one-tailed). While this result is obviously quite preliminary, it was a good demonstration that it is not easy distinguishing machine from human errors, even for technically sophisticated observers."
],
[
"We have discussed methodological issues and reported first findings when comparing automatic conversational speech transcriptions to human performance, using data generated by our recent efforts to reach human parity in CTS recognition. While an exact characterization of the human benchmark remains a moving target that is subject to debate, our results so far have shown that machine transcription errors track those made by humans in several important aspects. At the speaker (as well as corpus) level the two error rates are strongly correlated, suggesting that common underlying factors in the speech data determine transcription difficulty for both humans and ASR systems. (A detailed characterization of those factors has precedent in ASR research and should be revisited while also considering human performance.) A partial overlap of Switchboard training and test speakers seems to have no major effect on error rates. We also find that the most frequent error patterns involve the same short function words and discourse particles for both humans and machines. The one notable exception is that ASR tends to confuse filled pauses and backchannels, a functional distinction that humans need to be very good at pragmatically. An informal Turing-like test also demonstrated that error patterns in the two types of transcriptions are not obviously distinguishable. Overall, we conclude that recent advances in ASR technology have not only achieved remarkable levels of accuracy, but also generate results that are qualitatively surprisingly similar to professional human transcriber output."
],
[
"We thank our coauthors and collaborators on the Human Parity project: X. Huang, F. Seide, M. Seltzer, W. Xiong, D. Yu, and G. Zweig. Thanks to K. Riedhammer for sharing metadata on train/test speaker overlap."
]
]
}
|
{
"question": [
"what standard speech transcription pipeline was used?"
],
"question_id": [
"95d8368b1055d97250df38d1e8c4a2b283d2b57e"
],
"nlp_background": [
""
],
"topic_background": [
""
],
"paper_read": [
""
],
"search_query": [
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"pipeline that is used at Microsoft for production data"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The goal of reaching “human parity” in automatic CTS transcription raises the question of what should be considered human accuracy on this task. We operationalized the question by submitting the chosen test data to the same vendor-based transcription pipeline that is used at Microsoft for production data (for model training and internal evaluation purposes), and then comparing the results to ASR system output under the NIST scoring protocol. Using this methodology, and incorporating state-of-the-art convolutional and recurrent network architectures for both acoustic modeling BIBREF9 , BIBREF10 , BIBREF7 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 and language modeling BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 with extensive use of model combination, we obtained a machine error rate that was very slightly below that of the human transcription process (5.8% versus 5.9% on Switchboard data, and 11.0% versus 11.3% on CallHome English data) BIBREF19 . Since then, Saon et al. have reported even better results, along with a separate transcription experiment that puts the human error rate, on the same test data, at a lower point than measured by us (5.1% for Switchboard, 6.8% for CallHome) BIBREF20 ."
],
"highlighted_evidence": [
"We operationalized the question by submitting the chosen test data to the same vendor-based transcription pipeline that is used at Microsoft for production data (for model training and internal evaluation purposes), and then comparing the results to ASR system output under the NIST scoring protocol. "
]
}
],
"annotation_id": [
"1221d3bb8506cd725f8c6de105786d755804d8d2"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
}
|
{
"caption": [
"Figure 1: Correlation between machine and human word error rates at speaker level. The solid black circles represent SWB speakers not seen in training. The solid red circles stand for secondary CH speakers that share a conversation side with a dominating primary speaker.",
"Figure 2: Turing-like test challenging human players to tell machine from human transcripts",
"Table 1: Most common substitutions for ASR system and humans. The number of times each error occurs is followed by the word in the reference, and what appears in the hypothesis instead.",
"Table 2: Most common deletions for ASR system and humans.",
"Table 3: Most common insertions for ASR system and humans."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
}
|
1701.03214
|
An Empirical Comparison of Simple Domain Adaptation Methods for Neural Machine Translation
|
In this paper, we propose a novel domain adaptation method named "mixed fine tuning" for neural machine translation (NMT). We combine two existing approaches namely fine tuning and multi domain NMT. We first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus which is a mix of the in-domain and out-of-domain corpora. All corpora are augmented with artificial tags to indicate specific domains. We empirically compare our proposed method against fine tuning and multi domain methods and discuss its benefits and shortcomings.
|
{
"section_name": [
"Introduction",
"Related Work",
"Methods for Comparison",
"Fine Tuning",
"Multi Domain",
"Mixed Fine Tuning",
"Experimental Settings",
"High Quality In-domain Corpus Setting",
"Low Quality In-domain Corpus Setting",
"MT Systems",
"Results",
"Conclusion"
],
"paragraphs": [
[
"One of the most attractive features of neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 is that it is possible to train an end to end system without the need to deal with word alignments, translation rules and complicated decoding algorithms, which are a characteristic of statistical machine translation (SMT) systems. However, it is reported that NMT works better than SMT only when there is an abundance of parallel corpora. In the case of low resource domains, vanilla NMT is either worse than or comparable to SMT BIBREF3 .",
"Domain adaptation has been shown to be effective for low resource NMT. The conventional domain adaptation method is fine tuning, in which an out-of-domain model is further trained on in-domain data BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . However, fine tuning tends to overfit quickly due to the small size of the in-domain data. On the other hand, multi domain NMT BIBREF8 involves training a single NMT model for multiple domains. This method adds tags “<2domain>\" by modifying the parallel corpora to indicate domains without any modifications to the NMT system architecture. However, this method has not been studied for domain adaptation in particular.",
"Motivated by these two lines of studies, we propose a new domain adaptation method called “mixed fine tuning,\" where we first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus that is a mix of the in-domain and out-of-domain corpora. Fine tuning on the mixed corpus instead of the in-domain corpus can address the overfitting problem. All corpora are augmented with artificial tags to indicate specific domains. We tried two different corpora settings:",
"We observed that “mixed fine tuning\" works significantly better than methods that use fine tuning and domain tag based approaches separately. Our contributions are twofold:"
],
[
"Besides fine tuning and multi domain NMT using tags, another direction for domain adaptation is using in-domain monolingual data. Either training an in-domain recurrent neural network (RNN) language model for the NMT decoder BIBREF13 or generating synthetic data by back translating target in-domain monolingual data BIBREF5 has been studied."
],
[
"All the methods that we compare are simple and do not need any modifications to the NMT system."
],
[
"Fine tuning is the conventional way for domain adaptation, and thus serves as a baseline in this study. In this method, we first train an NMT system on a resource rich out-of-domain corpus till convergence, and then fine tune its parameters on a resource poor in-domain corpus (Figure 1 )."
],
[
"The multi domain method is originally motivated by BIBREF14 , which uses tags to control the politeness of NMT translations. The overview of this method is shown in the dotted section in Figure 2 . In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>\" to the source sentences of the respective corpora. This primes the NMT decoder to generate sentences for the specific domain. b. Oversampling the smaller corpus so that the training procedure pays equal attention to each domain.",
"We can further fine tune the multi domain model on the in-domain data, which is named as “multi domain + fine tuning.”"
],
[
"The proposed mixed fine tuning method is a combination of the above methods (shown in Figure 2 ). The training procedure is as follows:",
"Train an NMT model on out-of-domain data till convergence.",
"Resume training the NMT model from step 1 on a mix of in-domain and out-of-domain data (by oversampling the in-domain data) till convergence.",
"By default, we utilize domain tags, but we also consider settings where we do not use them (i.e., “w/o tags”). We can further fine tune the model from step 2 on the in-domain data, which is named as “mixed fine tuning + fine tuning.”",
"Note that in the “fine tuning” method, the vocabulary obtained from the out-of-domain data is used for the in-domain data; while for the “multi domain” and “mixed fine tuning” methods, we use a vocabulary obtained from the mixed in-domain and out-of-domain data for all the training stages."
],
[
"We conducted NMT domain adaptation experiments in two different settings as follows:"
],
[
"Chinese-to-English translation was the focus of the high quality in-domain corpus setting. We utilized the resource rich patent out-of-domain data to augment the resource poor spoken language in-domain data. The patent domain MT was conducted on the Chinese-English subtask (NTCIR-CE) of the patent MT task at the NTCIR-10 workshop BIBREF9 . The NTCIR-CE task uses 1000000, 2000, and 2000 sentences for training, development, and testing, respectively. The spoken domain MT was conducted on the Chinese-English subtask (IWSLT-CE) of the TED talk MT task at the IWSLT 2015 workshop BIBREF10 . The IWSLT-CE task contains 209,491 sentences for training. We used the dev 2010 set for development, containing 887 sentences. We evaluated all methods on the 2010, 2011, 2012, and 2013 test sets, containing 1570, 1245, 1397, and 1261 sentences, respectively."
],
[
"Chinese-to-Japanese translation was the focus of the low quality in-domain corpus setting. We utilized the resource rich scientific out-of-domain data to augment the resource poor Wikipedia (essentially open) in-domain data. The scientific domain MT was conducted on the Chinese-Japanese paper excerpt corpus (ASPEC-CJ) BIBREF11 , which is one subtask of the workshop on Asian translation (WAT) BIBREF15 . The ASPEC-CJ task uses 672315, 2090, and 2107 sentences for training, development, and testing, respectively. The Wikipedia domain task was conducted on a Chinese-Japanese corpus automatically extracted from Wikipedia (WIKI-CJ) BIBREF12 using the ASPEC-CJ corpus as a seed. The WIKI-CJ task contains 136013, 198, and 198 sentences for training, development, and testing, respectively."
],
[
"For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded. We early stopped the training process when we observed that the BLEU score of the development set converges. For testing, we self ensembled the three parameters of the best development loss, the best development BLEU, and the final parameters. Beam size was set to 100.",
"For performance comparison, we also conducted experiments on phrase based SMT (PBSMT). We used the Moses PBSMT system BIBREF17 for all of our MT experiments. For the respective tasks, we trained 5-gram language models on the target side of the training data using the KenLM toolkit with interpolated Kneser-Ney discounting, respectively. In all of our experiments, we used the GIZA++ toolkit for word alignment; tuning was performed by minimum error rate training BIBREF18 , and it was re-run for every experiment.",
"For both MT systems, we preprocessed the data as follows. For Chinese, we used KyotoMorph for segmentation, which was trained on the CTB version 5 (CTB5) and SCTB BIBREF19 . For English, we tokenized and lowercased the sentences using the tokenizer.perl script in Moses. Japanese was segmented using JUMAN BIBREF20 .",
"For NMT, we further split the words into sub-words using byte pair encoding (BPE) BIBREF21 , which has been shown to be effective for the rare word problem in NMT. Another motivation of using sub-words is making the different domains share more vocabulary, which is important especially for the resource poor domain. For the Chinese-to-English tasks, we trained two BPE models on the Chinese and English vocabularies, respectively. For the Chinese-to-Japanese tasks, we trained a joint BPE model on both of the Chinese and Japanese vocabularies, because Chinese and Japanese could share some vocabularies of Chinese characters. The number of merge operations was set to 30,000 for all the tasks."
],
[
"Tables 1 and 2 show the translation results on the Chinese-to-English and Chinese-to-Japanese tasks, respectively. The entries with SMT and NMT are the PBSMT and NMT systems, respectively; others are the different methods described in Section \"Methods for Comparison\" . In both tables, the numbers in bold indicate the best system and all systems that were not significantly different from the best system. The significance tests were performed using the bootstrap resampling method BIBREF22 at $p < 0.05$ .",
"We can see that without domain adaptation, the SMT systems perform significantly better than the NMT system on the resource poor domains, i.e., IWSLT-CE and WIKI-CJ; while on the resource rich domains, i.e., NTCIR-CE and ASPEC-CJ, NMT outperforms SMT. Directly using the SMT/NMT models trained on the out-of-domain data to translate the in-domain data shows bad performance. With our proposed “Mixed fine tuning\" domain adaptation method, NMT significantly outperforms SMT on the in-domain tasks.",
"Comparing different domain adaptation methods, “Mixed fine tuning” shows the best performance. We believe the reason for this is that “Mixed fine tuning” can address the over-fitting problem of “Fine tuning.” We observed that while “Fine tuning” overfits quickly after only 1 epoch of training, “Mixed fine tuning” only slightly overfits until convergence. In addition, “Mixed fine tuning” does not worsen the quality of out-of-domain translations, while “Fine tuning” and “Multi domain” do. One shortcoming of “Mixed fine tuning” is that compared to “fine tuning,” it took a longer time for the fine tuning process, as the time until convergence is essentially proportional to the size of the data used for fine tuning.",
"“Multi domain” performs either as well as (IWSLT-CE) or worse than (WIKI-CJ) “Fine tuning,” but “Mixed fine tuning” performs either significantly better than (IWSLT-CE) or is comparable to (WIKI-CJ) “Fine tuning.” We believe the performance difference between the two tasks is due to their unique characteristics. As WIKI-CJ data is of relatively poorer quality, mixing it with out-of-domain data does not have the same level of positive effects as those obtained by the IWSLT-CE data.",
"The domain tags are helpful for both “Multi domain” and “Mixed fine tuning.” Essentially, further fine tuning on in-domain data does not help for both “Multi domain” and “Mixed fine tuning.” We believe the reason for this is that the “Multi domain” and “Mixed fine tuning” methods already utilize the in-domain data used for fine tuning."
],
[
"In this paper, we proposed a novel domain adaptation method named “mixed fine tuning” for NMT. We empirically compared our proposed method against fine tuning and multi domain methods, and have shown that it is effective but is sensitive to the quality of the in-domain data used.",
"In the future, we plan to incorporate an RNN model into our current architecture to leverage abundant in-domain monolingual corpora. We also plan on exploring the effects of synthetic data by back translating large in-domain monolingual corpora. "
]
]
}
|
{
"question": [
"How much improvement does their method get over the fine tuning baseline?",
"What kinds of neural networks did they use in this paper?",
"How did they use the domain tags?"
],
"question_id": [
"a978a1ee73547ff3a80c66e6db3e6c3d3b6512f4",
"46ee1cbbfbf0067747b28bdf4c8c2f7dc8955650",
"4f12b41bd3bb2610abf7d7835291496aa69fb78c"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"domain adaptation",
"domain adaptation",
"domain adaptation"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "0.08 points on the 2011 test set, 0.44 points on the 2012 test set, 0.42 points on the 2013 test set for IWSLT-CE.",
"evidence": [
"FLOAT SELECTED: Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE."
]
}
],
"annotation_id": [
"f92d4930c3a5af4cac3ed3b914ec9a554dfeade4"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"LSTMs"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded. We early stopped the training process when we observed that the BLEU score of the development set converges. For testing, we self ensembled the three parameters of the best development loss, the best development BLEU, and the final parameters. Beam size was set to 100."
],
"highlighted_evidence": [
"For NMT, we used the KyotoNMT system BIBREF16 . The NMT training settings are the same as those of the best systems that participated in WAT 2016. The sizes of the source and target vocabularies, the source and target side embeddings, the hidden states, the attention mechanism hidden states, and the deep softmax output with a 2-maxout layer were set to 32,000, 620, 1000, 1000, and 500, respectively. We used 2-layer LSTMs for both the source and target sides. ADAM was used as the learning algorithm, with a dropout rate of 20% for the inter-layer dropout, and L2 regularization with a weight decay coefficient of 1e-6. The mini batch size was 64, and sentences longer than 80 tokens were discarded."
]
}
],
"annotation_id": [
"12335d0c788b511cd38f82941b7e5bba2fe24e21"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Appending the domain tag “<2domain>\" to the source sentences of the respective corpora"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The multi domain method is originally motivated by BIBREF14 , which uses tags to control the politeness of NMT translations. The overview of this method is shown in the dotted section in Figure 2 . In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>\" to the source sentences of the respective corpora. This primes the NMT decoder to generate sentences for the specific domain. b. Oversampling the smaller corpus so that the training procedure pays equal attention to each domain."
],
"highlighted_evidence": [
"In this method, we simply concatenate the corpora of multiple domains with two small modifications: a. Appending the domain tag “<2domain>\" to the source sentences of the respective corpora. "
]
}
],
"annotation_id": [
"65f0a6719b495621b5ad95e39f4305074795673f"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
}
|
{
"caption": [
"Figure 1: Fine tuning for domain adaptation",
"Figure 2: Tag based multi domain NMT",
"Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE.",
"Table 2: Domain adaptation results (BLEU-4 scores) for WIKI-CJ using ASPEC-CJ."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png",
"3-Table2-1.png"
]
}
|
1709.05411
|
Combining Search with Structured Data to Create a More Engaging User Experience in Open Domain Dialogue
|
The greatest challenges in building sophisticated open-domain conversational agents arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. In order to make coherent conversational contributions in this context, a conversational agent must be able to track the types and attributes of the entities under discussion in the conversation and know how they are related. In some cases, the agent can rely on structured information sources to help identify the relevant semantic relations and produce a turn, but in other cases, the only content available comes from search, and it may be unclear which semantic relations hold between the search results and the discourse context. A further constraint is that the system must produce its contribution to the ongoing conversation in real-time. This paper describes our experience building SlugBot for the 2017 Alexa Prize, and discusses how we leveraged search and structured data from different sources to help SlugBot produce dialogic turns and carry on conversations whose length over the semi-finals user evaluation period averaged 8:17 minutes.
|
{
"section_name": [
"Introduction",
"Modeling Discourse Coherence",
"Mixed Initiative Dialogue",
"Natural Language Generation",
"Conclusions"
],
"paragraphs": [
[
"The Alexa Prize funded 12 international teams to compete to create a conversational agent that can discuss any topic for at least 20 minutes. UCSC's Slugbot was one of these funded teams. The greatest challenges with the competition arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. This paper describes some of the lessons we learned building SlugBot for the 2017 Alexa Prize, particularly focusing on the challenges of integrating content found via search with content from structured data in order to carry on an ongoing, coherent, open-domain, mixed-initiative conversation. SlugBot's conversations over the semi-finals user evaluation averaged 8:17 minutes.",
"Unlike much previous work on conversational AI, SlugBot could not and did not assume that the user had an “information need” BIBREF0 , BIBREF1 , BIBREF2 . Rather, the design of the Alexa Prize was aimed at open conversations that could engage the user, through any type of dialogue or chitchat, discussing films and books, gossiping about celebrities, playing verbal games, telling stories or sharing experiences, or any other of many different types of activities that conversation is often used for.",
"This open design foregrounds many longstanding challenges that have not been solved even for task-oriented dialogue systems. These include:",
"This paper is structured around the “lessons learned” with respect to these challenges from our experience building SlugBot. To be clear, we are not offering a solution to these problems: instead our intention is simply to highlight the difficulties with developing adequate computational models of these phenomena that particularly arise in the context of open-domain conversations, where users cannot be assumed to be pursuing a particular task or information need. We will attempt to motivate our hypothesis that a comprehensive solution to these challenges for open-domain dialogue requires a much deeper understanding and utilization of the semantic relations that underly dialogue coherence.",
"For example, consider dialogue focused on content related to the movie domain. This should be one of the easiest domains because it is well-structured, and there are existing systems handling conversations where there is a specified user information need or task, such as finding films with particular properties, finding out what is playing and where, or booking a movie ticket BIBREF3 , BIBREF4 , BIBREF5 . Moreover, the Internet Movie Database (IMDB) BIBREF6 provides information on plot, rating, and actors that can be leveraged to support conversations. IMDB also makes use of the Schema.org BIBREF7 structure to connect common entities to their related attribute types (such as Actor $\\rightarrow $ Person $\\rightarrow $ birthDate), allowing the system to retrieve a large set of possible next topics and related facts and entities.",
"However, remember that SlugBot is based on the assumption that the user might simply enjoy talking about films and related entities and therefore may freely move the conversational focus among different movie entities, along with the vast array of semantically-associated movie attributes: movies have actors, genres, plots, and awards; actors have names, affiliations, other movies they were in, awards, etc. Actors are people, who have spouses, families and friends, and engage in other life activities besides acting, such as political advocacy.",
"A potential dialogue is shown in Table 1 . The interaction might appear to be simple enough: the user chooses to discuss movies, and selects Jason Bourne as the specific movie she is interested in, the system finds the movie in IMDB, and then provides information on its rating, lead actor, and plot. The user then changes the topic to other movies with the same actor, and the conversation continues.",
"Even with the availability of IMDB, however, the interaction is not totally straightforward. The RHS of Table 1 describes some of the required competencies and decisions SlugBot must make. First, Slugbot must be able to perform coreference resolution and recognize that the movie and it in turns U6 and U8 are coreferential. We estimate the accuracy of noun-phrase coreference resolution to only be about 70% for off-the-shelf tools applied to dialogue, since most of them are targeted to text BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 .",
"More challenging is that at each system turn, there are a large number of conversational moves that are possible. Making good decisions about what to say next requires balancing a dialogue policy as to what dialogue acts might be good in this context, with real-time information as to what types of content might be possible to use in this context. Slugbot could offer an opinion as in turn S3, ask a follow-on question as in S3, take the initiative to provide unasked for information, as in S5, or decide, e.g. in the case of the user's request for plot information, to use search to retrieve some relevant content. Search cannot be used effectively here without constructing an appropriate query, or knowing in advance where plot information might be available. In a real-time system, live search may not be able to achieve the required speed and efficiency, so preprocessing or caching of relevant information may be necessary. Finally, most other domains do not have such high quality structured data available, leaving us to develop or try to rely on more general models of discourse coherence."
],
[
"In open-domain conversation, dialogue coherence between related turns must be maintained. What underlies dialogue coherence goes beyond simple word overlap or similarity, and it's clear that neural models of open-domain conversational dialogue do not yet capture it. Theories of discourse posit that there are a small number of semantic relations that can hold between adjacent turns: at the most general level these are contingency, comparison, expansion, and temporal order BIBREF16 , BIBREF17 , BIBREF18 . We posit that one way to allow SlugBot to take the initiative and produce a turn that maintains discourse coherence is to find content to use in Slugbot's next turn that instantiates a valid semantic relation between the current user turn and SlugBot's next turn. One of the strongest bases for such semantic relations are the relations captured by ontologies or frames, which give us related entities, e.g. movies have actors and directors BIBREF4 , BIBREF21 . These types of relations can be used to instantiate the expansion relation, which basically captures moving to strongly related subtopics, often by chaining off a particular discourse entity. To find content to instantiate the expansion relation to use in Slugbot's next turn (taking the initiative), we carry out the following pipeline:",
"In the case of movies, the structure of IMDB, as discussed above, allows us to link between related entities and attributes using various DB keys. However other conversational domains do not have freely available richly structured information such as this. It is rare for a single resource to aggregate all the information that might be useful, so SlugBot must be able to leverage information and integrate information from multiple sources. But state-of-the-art knowledge bases and ontologies are still limited. Table 2 lists some of the resources that we have found to be most useful for search and structured information.",
"Like movies, sports is another domain that has rich structure, and in which there is broad user interest. Search results for a query about \"Madison Bumgarner\" are in Figure 1 , showcasing a sample of the different information retrievable from each source (Step 2 of the pipeline).",
"From the Google Knowledge Graph (Figure 1) result we are able to ascertain the entity type, a brief description, and a relevant Wikipedia page (Figure 1) which we can use to find accurate structured information. We may further augment our knowledge by using the information returned by the Google Knowledge Graph as parameters to our YAGO or DBpedia query which can more easily extract specific relationships between an entity-attribute. For example, the results returned by YAGO for the \"Madison Bumgarner\" query contains a connection to the headline Struggling MadBum might not garner next start, which is contextually relevant data not encapsulated anywhere in the previously examined results.",
"There exists, however, a disconnect between the resources, i.e. some entities are available in one resource and not another, or there may be inconsistent information across resources. While it would be nice not to have to anticipate the types of integration that are needed, our take-away from this is that at present, it appears we have to accomplish the steps in our pipeline by integrating knowledge from different resources in advance, even though projects such as YAGO have already been working on such integration for at least ten years.",
"Other discourse coherence relations besides expansion are also viable candidates for selecting content for next turns, but finding content that instantiates these relations can be a challenging problem in itself. For example, in casual conversation, it is common to provide opinions and then perhaps further take the initiative and justify them. The justification of an opinion is a type of contingency relation: we describe how we curate content to provide justifications in Section \"Mixed Initiative Dialogue\" .",
"We have also been able to use the temporal relation in a limited way by drawing on narratively structured sources, such as personal stories in blogs. Since these stories are told in temporal order, we can repurpose the content of these blogs to tell stories, maintaining pre-existing narrative coherence when the system produces a sequence of turns BIBREF33 . However, we posit that there is much more that could be done to make better use of deep semantic discourse relations for recognizing discourse relations and generating coherent conversational turns."
],
[
"Mixed Initiative dialogue is key to a natural conversational interaction BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 , BIBREF2 , and this is even more important for open domain dialogue than it is for task-oriented or information seeking dialogue. One of our primary hypotheses, as described above, is that good models of discourse coherence will help SlugBot identify content that can be used to take the initiative. However, models of discourse coherence have been rarely applied to conversation BIBREF39 , BIBREF40 , BIBREF41 and thus there is considerable work to be done simply in understanding how these relations can be instantiated in dialogue.",
"In addition, a further challenge arises from the fact that both system and user options for dialogue acts are extremely varied at each turn, e.g. user intents can be to provide opinions, give or solicit information, contrast two possibilities, request the system to perform an action, and more. One reasonable taxonomy for the types of dialogue acts that might be available to SlugBot could be based for example on the dialogue act annotations in the Switchboard corpus BIBREF42 .",
"Here, we consider a simple case combining discourse relations and dialogue acts that we have implemented in Slugbot in order to take the initiative in a way that we hoped the user would find interesting. Our aim was to utilize the contingency discourse relation to connect a statement of opinion and its justification. We designed a template containing both arguments of the contingency relation, namely I think $\\lbrace entity\\rbrace $ is $\\lbrace sentiment\\rbrace $ because $\\lbrace justification\\rbrace $ . We construct a table of argument pairs that can instantiate this relation, as shown in Table 3 . This table can be populated by crowd-sourcing or by using search as a pre-processing step.",
"Table 4 illustrates how this is used in our conversations about comics. At Line 6, when the user asks Who is your favorite character?, it is most appropriate to provide an opinion. It is difficult to imagine retrieving search-based data which contains a contextually relevant opinion, but it is even more difficult to imagine that if search had returned such an opinion, that search could be used a second time in order to retrieve a justification for the provided opinion and answer the user's follow-up question in Line 8, Okay why?. The source text for the search would have to be annotated for the type of content that could be used to provide justifications, and search would have to support these types of semantic relations."
],
[
"The current challenges for natural language generation, in our view, arise from the need to combine information from structured and unstructured sources when producing conversational utterances. SlugBot currently uses a combination of pre-written templates, sentence selection, and techniques for telling stories that are based on converting monologic stories to dialogic sequences BIBREF33 .",
"Structured data, when available, can do more than structure a search result: it can also be easier to use within a conversation because it provides the necessary structure needed for high precision natural language generation BIBREF22 , BIBREF43 . More precisely, a small set of generic templates with various slots can be filled with information from structured data sources to ensure high quality, accurate responses. These generic templates can be hand crafted, or prepared in advance by learning natural language generation templates automatically from appropriate conversational domain sources such as different types of user-generated content BIBREF44 , BIBREF23 , as illustrated in our justification initiatives above in Section \"Mixed Initiative Dialogue\" .",
"For general fact-based questions, on the other hand, search content can be used directly. For example, at line 14 in Table 5 when the user asks What was the first movie to feature a vampire?, search provides us with a good response. This introduces, however, the challenge of updating the discourse context with the right representation of the two movies under discussion, so that they can then be available for follow-on coreference. This is an open problem.",
"It is clear that in order to use a semi-structured approach, we need to determine when to utilize each source. Structured data can be easier to formulate into system responses and can often more easily handle on-topic follow-up questions, but is more limited in scope. An obvious approach, also used in the Watson Jeopardy system BIBREF45 , is to pool responses from both sources and rank them. We have not, to date, collected enough data to build a ranker.",
"Our plan is to apply a combination of reinforcement learning and learning of ranking functions for utterance variants in a particular context to SlugBot conversations as we move forward with our own data collection, outside of the Alexa Prize competition BIBREF46 , BIBREF47 , BIBREF48 , BIBREF49 , BIBREF50 . The first step however is to use the Alexa Prize competition data to learn a Paradise-Open-Domain evaluation function, with additional metrics relevant to open-domain dialogue, e.g. independent variable metrics that predict overall dialogue quality such as response delay, vocabulary diversity, dialogue act sequence n-grams BIBREF51 , conversational depth, number of reprompts BIBREF52 , and other measures that can be automatically logged. Many of the required measures have been used over the last 20 years in Paradise to evaluate task-oriented dialogue systems and they remain highly relevant to overall dialogue quality in open-domain dialogue systems BIBREF53 , BIBREF54 , BIBREF55 . We predict this can potentially improve the overall performance of the system as demonstrated in Table 6 . Here, the structured data is sparse, resulting in an uninteresting response, while search returns a very robust answer. Our Paradise-Open-Domain evaluation function would need to learn to place priority on the result returned by search, through ranking, despite having structured data.",
"For open domain NLG, we have also conducted experiments with neural sequence to sequence approaches using open domain corpora such as film dialogue, Big Bang Theory scripts, and open subtitles. These approaches to date do not produce interesting utterances that maintain discourse coherence. It is possible that further curation and semantic annotation of these resources, e.g. by labelling semantic roles and identifying dialogue acts and discourse relations, might be helpful, but this could also introduce data sparsity. For example in Switchboard the dialogue act distribution is highly skewed. Integrating information across multiple sources could also be further explored BIBREF33 . Recent work on hybrid neural generation approaches that use knowledge of sentence and discourse planning structures also seem promising BIBREF24 , BIBREF48 , BIBREF56 ."
],
[
"In this paper, we describe some of the challenges we encountered building SlugBot, an open domain conversational agent funded by the Amazon Alexa Prize. We have introduced more problems than we have solved, and we have attempted to support our hypothesis that we need richer models of discourse coherence and discourse semantics to allow a conversational agent to take the initiative in open domain conversations. We illustrated how search and structured information can be combined in order for SlugBot to find content to use to take the initiative and respond to the user's utterances. We propose a hybrid approach for language generation that combines templates to generate responses with sentence selection from search, and we show examples in different domains to demonstrate real-world use cases that make use of our approach. For future work, we plan to bring together resources that provide structured data from different sources into a single, accessible framework, to supply personal assistants with scalable knowledge bases that will power more natural, mixed initiative, and engaging conversations. We believe that it will be possible in the next few years to build conversational agents that can carry on a conversation for 20 minutes about many different topics."
]
]
}
|
{
"question": [
"Why mixed initiative multi-turn dialogs are the greatest challenge in building open-domain conversational agents?"
],
"question_id": [
"65e6a1cc2590b139729e7e44dce6d9af5dd2c3b5"
],
"nlp_background": [
"infinity"
],
"topic_background": [
"familiar"
],
"paper_read": [
"no"
],
"search_query": [
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"do not follow a particular plan or pursue a particular fixed information need",
" integrating content found via search with content from structured data",
"at each system turn, there are a large number of conversational moves that are possible",
"most other domains do not have such high quality structured data available",
"live search may not be able to achieve the required speed and efficiency"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The Alexa Prize funded 12 international teams to compete to create a conversational agent that can discuss any topic for at least 20 minutes. UCSC's Slugbot was one of these funded teams. The greatest challenges with the competition arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. This paper describes some of the lessons we learned building SlugBot for the 2017 Alexa Prize, particularly focusing on the challenges of integrating content found via search with content from structured data in order to carry on an ongoing, coherent, open-domain, mixed-initiative conversation. SlugBot's conversations over the semi-finals user evaluation averaged 8:17 minutes.",
"More challenging is that at each system turn, there are a large number of conversational moves that are possible. Making good decisions about what to say next requires balancing a dialogue policy as to what dialogue acts might be good in this context, with real-time information as to what types of content might be possible to use in this context. Slugbot could offer an opinion as in turn S3, ask a follow-on question as in S3, take the initiative to provide unasked for information, as in S5, or decide, e.g. in the case of the user's request for plot information, to use search to retrieve some relevant content. Search cannot be used effectively here without constructing an appropriate query, or knowing in advance where plot information might be available. In a real-time system, live search may not be able to achieve the required speed and efficiency, so preprocessing or caching of relevant information may be necessary. Finally, most other domains do not have such high quality structured data available, leaving us to develop or try to rely on more general models of discourse coherence."
],
"highlighted_evidence": [
"The greatest challenges with the competition arise directly from the potential for ongoing mixed-initiative multi-turn dialogues, which do not follow a particular plan or pursue a particular fixed information need. ",
"This paper describes some of the lessons we learned building SlugBot for the 2017 Alexa Prize, particularly focusing on the challenges of integrating content found via search with content from structured data in order to carry on an ongoing, coherent, open-domain, mixed-initiative conversation",
"More challenging is that at each system turn, there are a large number of conversational moves that are possible.",
" Finally, most other domains do not have such high quality structured data available, leaving us to develop or try to rely on more general models of discourse coherence.",
" Search cannot be used effectively here without constructing an appropriate query, or knowing in advance where plot information might be available. In a real-time system, live search may not be able to achieve the required speed and efficiency, so preprocessing or caching of relevant information may be necessary. "
]
}
],
"annotation_id": [
"124e995b04caa055ccba03e47ab8e7871cdd8af9"
],
"worker_id": [
"08f81a5d78e451df16193028defb70150c4201c9"
]
}
]
}
|
{
"caption": [
"Table 1: Sample Dialogue about Movies. System content indicated as coming from search† or structured data‡.",
"Table 2: Search and Structured Information Resources",
"Figure 1: Sample Available Resources for Query “Madison Bumgarner”",
"Table 4: Sample Dialogue about Comic Books. System content based on either search† or structured data‡.",
"Table 5: Sample Dialogue about Monsters . System content is curated based on search† or structured data‡.",
"Table 6: Using Structured Data vs Search"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Figure1-1.png",
"4-Table4-1.png",
"5-Table5-1.png",
"5-Table6-1.png"
]
}
|
1805.12032
|
Identifying and Understanding User Reactions to Deceptive and Trusted Social News Sources
|
In the age of social news, it is important to understand the types of reactions that are evoked from news sources with various levels of credibility. In the present work we seek to better understand how users react to trusted and deceptive news sources across two popular, and very different, social media platforms. To that end, (1) we develop a model to classify user reactions into one of nine types, such as answer, elaboration, and question, and (2) we measure the speed and the type of reaction for trusted and deceptive news sources for 10.8M Twitter posts and 6.2M Reddit comments. We show that there are significant differences in the speed and the type of reactions between trusted and deceptive news sources on Twitter, but far smaller differences on Reddit.
|
{
"section_name": [
"Introduction",
"Reaction Type Classification",
"Reddit Data",
"Model",
"Reaction Type Classification Results",
"Measuring Reactions to Trusted and Deceptive News Sources",
"Twitter and Reddit News Data",
"Methodology",
"Results and Discussion",
"Related Work",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"As the reliance on social media as a source of news increases and the reliability of sources is increasingly debated, it is important to understand how users react to various sources of news. Most studies that investigate misinformation spread in social media focus on individual events and the role of the network structure in the spread BIBREF0 , BIBREF1 , BIBREF2 or detection of false information BIBREF3 . These studies have found that the size and shape of misinformation cascades within a social network depends heavily on the initial reactions of the users. Other work has focused on the language of misinformation in social media BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 to detect types of deceptive news.",
"As an alternative to studying newsworthy events one at a time BIBREF10 , the current work applies linguistically-infused models to predict user reactions to deceptive and trusted news sources. Our analysis reveals differences in reaction types and speed across two social media platforms — Twitter and Reddit.",
"The first metric we report is the reaction type. Recent studies have found that 59% of bitly-URLs on Twitter are shared without ever being read BIBREF11 , and 73% of Reddit posts were voted on without reading the linked article BIBREF12 . Instead, users tend to rely on the commentary added to retweets or the comments section of Reddit-posts for information on the content and its credibility. Faced with this reality, we ask: what kind of reactions do users find when they browse sources of varying credibility? Discourse acts, or speech acts, can be used to identify the use of language within a conversation, e.g., agreement, question, or answer. Recent work by Zhang et al. zhang2017characterizing classified Reddit comments by their primary discourse act (e.g., question, agreement, humor), and further analyzed patterns from these discussions.",
"The second metric we report is reaction speed. A study by Jin et al. jin2013epidemiological found that trusted news stories spread faster than misinformation or rumor; Zeng et al. zeng2016rumors found that tweets which deny rumors had shorter delays than tweets of support. Our second goal is to determine if these trends are maintained for various types of news sources on Twitter and Reddit.",
"Hence, the contributions of this work are two-fold: (1) we develop a linguistically-infused neural network model to classify reactions in social media posts, and (2) we apply our model to label 10.8M Twitter posts and 6.2M Reddit comments in order to evaluate the speed and type of user reactions to various news sources."
],
[
"In this section, we describe our approach to classify user reactions into one of eight types of discourse: agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, or question, or as none of the given labels, which we call “other”, using linguistically-infused neural network models."
],
[
"We use a manually annotated Reddit dataset from Zhang et al. zhang2017characterizing to train our reaction classification model. Annotations from 25 crowd-workers labelled the primary discourse act for 101,525 comments within 9,131 comment threads on Reddit. The Reddit IDs, but not the text content of the comments themselves, were released with the annotations. So we collected the content of Reddit posts and comments from a public archive of Reddit posts and comments. Some content was deleted prior to archival, so the dataset shown in Table TABREF3 is a subset of the original content. Despite the inability to capture all of the original dataset, Table TABREF3 shows a similar distribution between our dataset and the original."
],
[
"We develop a neural network architecture that relies on content and other linguistic signals extracted from reactions and parent posts, and takes advantage of a “late fusion” approach previously used effectively in vision tasks BIBREF13 , BIBREF14 . More specifically, we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent."
],
[
"As shown in Figure FIGREF7 , our linguistically-infused neural network model that relies solely on the content of the reaction and its parent has comparable performance to the more-complex CRF model by Zhang et al. zhang2017characterizing, which relies on content as well as additional metadata like the author, thread (e.g., the size of the thread, the number of branches), structure (e.g., the position within the thread), and community (i.e., the subreddit in which the comment is posted)."
],
[
"In this section, we present key results of our analysis of how often and how quickly users react to content from sources of varying credibility using the reaction types predicted by our linguistically-infused neural network model."
],
[
"We focus on trusted news sources that provide factual information with no intent to deceive and deceptive news sources. Deceptive sources are ranked by their intent to deceive as follows: clickbait (attention-grabbing, misleading, or vague headlines to attract an audience), conspiracy theory (uncorroborated or unreliable information to explain events or circumstances), propaganda (intentionally misleading information to advance a social or political agenda), and disinformation (fabricated or factually incorrect information meant to intentionally deceive readers).",
"Trusted, clickbait, conspiracy, and propaganda sources were previously compiled by Volkova et al. volkova2017separating through a combination of crowd-sourcing and public resources. Trusted news sources with Twitter-verified accounts were manually labeled and clickbait, conspiracy, and propaganda news sources were collected from several public resources that annotate suspicious news accounts. We collected news sources identified as spreading disinformation by the European Union's East Strategic Communications Task Force from euvsdisinfo.eu. In total, there were 467 news sources: 251 trusted and 216 deceptive.",
"We collected reaction data for two popular platforms, Reddit and Twitter, using public APIs over the 13 month period from January 2016 through January 2017. For our Reddit dataset, we collected all Reddit posts submitted during the 13 month period that linked to domains associated with one of our labelled news sources. Then we collected all comments that directly responded to those posts. For our Twitter dataset, we collected all tweets posted in the 13 month period that explicitly @mentioned or directly retweeted content from a source and then assigned a label to each tweet based on the class of the source @mentioned or retweeted. A breakdown of each dataset by source type is shown in Table TABREF10 . Figure FIGREF11 illustrates the distribution of deceptive news sources and reactions across the four sub-categories of deceptive news sources. In our analysis, we consider the set of all deceptive sources and the set excluding the most extreme (disinformation)."
],
[
"We use the linguistically-infused neural network model from Figure FIGREF5 to label the reaction type of each tweet or comment. Using these labels, we examine how often response types occur when users react to each type of news source. For clarity, we report the five most frequently occurring reaction types (expressed in at least 5% of reactions within each source type) and compare the distributions of reaction types for each type of news source.",
"To examine whether users react to content from trusted sources differently than from deceptive sources, we measure the reaction delay, which we define as the time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred. We report the cumulative distribution functions (CDFs) for each source type and use Mann Whitney U (MWU) tests to compare whether users respond with a given reaction type with significantly different delays to news sources of different levels of credibility."
],
[
"For both Twitter and Reddit datasets, we found that the primary reaction types were answer, appreciation, elaboration, question, or “other” (no label was predicted). Figure FIGREF13 illustrates the distribution of reaction types among Reddit comments (top plot) or tweets (bottom plot) responding to each type of source, as a percentage of all comments/tweets reacting to sources of the given type (i.e., trusted, all deceptive, and deceptive excluding disinformation sources).",
"For Twitter, we report clear differences in user reactions to trusted vs. deceptive sources. Deceptive (including disinformation) sources have a much higher rate of appreciation reactions and a lower rate of elaboration responses, compared to trusted news sources. Differences are still significant ( INLINEFORM0 ) but the trends reverse if we do not include disinformation sources. We also see an increase in the rate of question-reactions compared to trusted news sources if we exclude disinformation sources.",
"For Reddit, there appears to be a very similar distribution across reaction types for trusted and deceptive sources. However, MWU tests still found that the differences between trusted and deceptive news sources were statistically significant ( INLINEFORM0 ) — regardless of whether we include or exclude disinformation sources. Posts that link to deceptive sources have higher rates of question, appreciation, and answering reactions, while posts that link to trusted sources have higher rates of elaboration, agreement, and disagreement.",
"Next, we compared the speed with which users reacted to posts of sources of varying credibility. Our original hypothesis was that users react to posts of trusted sources faster than posts of deceptive sources. The CDFs for each source type and platform (solid and dashed lines represent Reddit and Twitter respectively) are shown in Figure FIGREF14 . We observe that the lifetime of direct reactions to news sources on Twitter is often more extended than for sources on Reddit. One exception is answer reactions, which almost always occur within the first hour after the Twitter news source originally posted the tweet being answered. This may be due to the different ways that users consume content on the two platforms. Users follow accounts on Twitter, whereas on Reddit users “follow” topics through their subscriptions to various subreddits. Users can view the news feeds of individual sources on Twitter and view all of the sources' posts. Reddit, on the other hand, is not designed to highlight individual users or news sources; instead new posts (regardless of the source) are viewed based on their hotness score within each subreddit.",
"In addition, we observe that reactions to posts linked to trusted sources are less heavily concentrated within the first 12 to 15 hours of the post's lifetime on Reddit. The opposite is found on Twitter. Twitter sources may have a larger range of reaction delays, but they are also more heavily concentrated in the lower end of that range ( INLINEFORM0 )."
],
[
"As we noted above, most studies that examine misinformation spread focus on individual events such as natural disasters BIBREF17 , political elections BIBREF18 , or crises BIBREF19 and examine the response to the event on social media. A recent study by Vosoughi et al. vosoughi2018spread found that news stories that were fact-checked and found to be false spread faster and to more people than news items found to be true. In contrast, our methodology considers immediate reactions to news sources of varying credibility, so we can determine whether certain reactions or reactions to trusted or deceptive news sources evoke more or faster responses from social media users."
],
[
"In the current work, we have presented a content-based model that classifies user reactions into one of nine types, such as answer, elaboration, and question, and a large-scale analysis of Twitter posts and Reddit comments in response to content from news sources of varying credibility.",
"Our analysis of user reactions to trusted and deceptive sources on Twitter and Reddit shows significant differences in the distribution of reaction types for trusted versus deceptive news. However, due to differences in the user interface, algorithmic design, or user-base, we find that Twitter users react to trusted and deceptive sources very differently than Reddit users. For instance, Twitter users questioned disinformation sources less often and more slowly than they did trusted news sources; Twitter users also expressed appreciation towards disinformation sources more often and faster than towards trusted sources. Results from Reddit show similar, but far less pronounced, reaction results.",
"Future work may focus on analysis of reaction behavior from automated (i.e., 'bot'), individual, or organization accounts; on additional social media platforms and languages; or between more fine-grained categories of news source credibility."
],
[
"The research described in this paper is based on Twitter and Reddit data collected by the University of Notre Dame using public APIs. The research was supported by the Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy. This research is also supported by the Defense Advanced Research Projects Agency (DARPA), contract W911NF-17-C-0094. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government."
]
]
}
|
{
"question": [
"How is speed measured?",
"What is the architecture of their model?",
"What are the nine types?"
],
"question_id": [
"b54fc86dc2cc6994e10c1819b6405de08c496c7b",
"b43a8a0f4b8496b23c89730f0070172cd5dca06a",
"b161febf86cdd58bd247a934120410068b24b7d1"
],
"nlp_background": [
"",
"",
""
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The first metric we report is the reaction type. Recent studies have found that 59% of bitly-URLs on Twitter are shared without ever being read BIBREF11 , and 73% of Reddit posts were voted on without reading the linked article BIBREF12 . Instead, users tend to rely on the commentary added to retweets or the comments section of Reddit-posts for information on the content and its credibility. Faced with this reality, we ask: what kind of reactions do users find when they browse sources of varying credibility? Discourse acts, or speech acts, can be used to identify the use of language within a conversation, e.g., agreement, question, or answer. Recent work by Zhang et al. zhang2017characterizing classified Reddit comments by their primary discourse act (e.g., question, agreement, humor), and further analyzed patterns from these discussions.",
"The second metric we report is reaction speed. A study by Jin et al. jin2013epidemiological found that trusted news stories spread faster than misinformation or rumor; Zeng et al. zeng2016rumors found that tweets which deny rumors had shorter delays than tweets of support. Our second goal is to determine if these trends are maintained for various types of news sources on Twitter and Reddit.",
"To examine whether users react to content from trusted sources differently than from deceptive sources, we measure the reaction delay, which we define as the time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred. We report the cumulative distribution functions (CDFs) for each source type and use Mann Whitney U (MWU) tests to compare whether users respond with a given reaction type with significantly different delays to news sources of different levels of credibility."
],
"highlighted_evidence": [
"The first metric we report is the reaction type.",
"The second metric we report is reaction speed. A study by Jin et al. jin2013epidemiological found that trusted news stories spread faster than misinformation or rumor; Zeng et al. zeng2016rumors found that tweets which deny rumors had shorter delays than tweets of support. Our second goal is to determine if these trends are maintained for various types of news sources on Twitter and Reddit.",
"To examine whether users react to content from trusted sources differently than from deceptive sources, we measure the reaction delay, which we define as the time elapsed between the moment the link or content was posted/tweeted and the moment that the reaction comment or tweet occurred. We report the cumulative distribution functions (CDFs) for each source type and use Mann Whitney U (MWU) tests to compare whether users respond with a given reaction type with significantly different delays to news sources of different levels of credibility."
]
}
],
"annotation_id": [
"1253580ddca3f5c80fad5ae7d5499d6e925817e4"
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Hence, the contributions of this work are two-fold: (1) we develop a linguistically-infused neural network model to classify reactions in social media posts, and (2) we apply our model to label 10.8M Twitter posts and 6.2M Reddit comments in order to evaluate the speed and type of user reactions to various news sources.",
"We develop a neural network architecture that relies on content and other linguistic signals extracted from reactions and parent posts, and takes advantage of a “late fusion” approach previously used effectively in vision tasks BIBREF13 , BIBREF14 . More specifically, we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent."
],
"highlighted_evidence": [
"Hence, the contributions of this work are two-fold: (1) we develop a linguistically-infused neural network model to classify reactions in social media posts, and (2) we apply our model to label 10.8M Twitter posts and 6.2M Reddit comments in order to evaluate the speed and type of user reactions to various news sources.",
"We develop a neural network architecture that relies on content and other linguistic signals extracted from reactions and parent posts, and takes advantage of a “late fusion” approach previously used effectively in vision tasks BIBREF13 , BIBREF14 . More specifically, we combine a text sequence sub-network with a vector representation sub-network as shown in Figure FIGREF5 . The text sequence sub-network consists of an embedding layer initialized with 200-dimensional GloVe embeddings BIBREF15 followed by two 1-dimensional convolution layers, then a max-pooling layer followed by a dense layer. The vector representation sub-network consists of two dense layers. We incorporate information from both sub-networks through concatenated padded text sequences and vector representations of normalized Linguistic Inquiry and Word Count (LIWC) features BIBREF16 for the text of each post and its parent."
]
}
],
"annotation_id": [
"12715a92fe478e5dc21809d69376576407202018"
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"agreement",
"answer",
"appreciation",
"disagreement",
"elaboration",
"humor",
"negative reaction",
"question",
"other"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this section, we describe our approach to classify user reactions into one of eight types of discourse: agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, or question, or as none of the given labels, which we call “other”, using linguistically-infused neural network models."
],
"highlighted_evidence": [
"\n",
"In this section, we describe our approach to classify user reactions into one of eight types of discourse: agreement, answer, appreciation, disagreement, elaboration, humor, negative reaction, or question, or as none of the given labels, which we call “other”, using linguistically-infused neural network models."
]
}
],
"annotation_id": [
"cc1f08762ac577fbe2edb092b9769ae0da03c409"
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
]
}
|
{
"caption": [
"Figure 1: Architecture of neural network model used to predict reaction types.",
"Table 1: Summary of the training data we recovered compared to the data collected by Zhang et al. (2017) reported as distributions of comments across reaction types.",
"Figure 2: Comparison of our model’s performance, measured using F1 score, trained only on content features, with the performance reported by Zhang et al. (2017) trained on content, author, thread, structure, and community features.",
"Table 2: Summary of Twitter and Reddit datasets used to measure the speed and types of reactions to Trusted and Deceptive news sources excluding (no disinformation) or including (All) the most extreme of the deceptive sources — those identified as spreading disinformation.",
"Figure 3: Distributions of Deceptive news sources and reactions to those sources (Reddit comments or tweets, respectively) for the Reddit and Twitter datasets across the four subcategories of deceptive news sources.",
"Figure 4: Distributions of five most frequently occurring reaction types within comments on Reddit and tweets on Twitter for each news source type (MWU p < 0.01).",
"Figure 5: CDF plots of the volumes of reactions by reaction delays for the frequently occurring reactions (i.e., , reactions that occur in at least 5% of comments) for each source-type, using a step size of one hour. The CDF for Elaboration-reactions to Deceptive (no disinformation) Twitter news sources is occluded by the CDF for Deceptive Twitter news sources. This figure is best viewed in color."
],
"file": [
"2-Figure1-1.png",
"2-Table1-1.png",
"2-Figure2-1.png",
"3-Table2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-Figure5-1.png"
]
}
|
1611.02550
|
Discriminative Acoustic Word Embeddings: Recurrent Neural Network-Based Approaches
|
Acoustic word embeddings --- fixed-dimensional vector representations of variable-length spoken word segments --- have begun to be considered for tasks such as speech recognition and query-by-example search. Such embeddings can be learned discriminatively so that they are similar for speech segments corresponding to the same word, while being dissimilar for segments corresponding to different words. Recent work has found that acoustic word embeddings can outperform dynamic time warping on query-by-example search and related word discrimination tasks. However, the space of embedding models and training approaches is still relatively unexplored. In this paper we present new discriminative embedding models based on recurrent neural networks (RNNs). We consider training losses that have been successful in prior work, in particular a cross entropy loss for word classification and a contrastive loss that explicitly aims to separate same-word and different-word pairs in a"Siamese network"training setting. We find that both classifier-based and Siamese RNN embeddings improve over previously reported results on a word discrimination task, with Siamese RNNs outperforming classification models. In addition, we present analyses of the learned embeddings and the effects of variables such as dimensionality and network structure.
|
{
"section_name": [
"Introduction",
"Related work",
"Approach",
"Training",
"EXPERIMENTS",
"Classification network details",
"Siamese network details",
"Results",
"Effect of model structure",
"Effect of embedding dimensionality",
"Effect of training vocabulary",
"Visualization of embeddings",
"Conclusion"
],
"paragraphs": [
[
"Many speech processing tasks – such as automatic speech recognition or spoken term detection – hinge on associating segments of speech signals with word labels. In most systems developed for such tasks, words are broken down into sub-word units such as phones, and models are built for the individual units. An alternative, which has been considered by some researchers, is to consider each entire word segment as a single unit, without assigning parts of it to sub-word units. One motivation for the use of whole-word approaches is that they avoid the need for sub-word models. This is helpful since, despite decades of work on sub-word modeling BIBREF0 , BIBREF1 , it still poses significant challenges. For example, speech processing systems are still hampered by differences in conversational pronunciations BIBREF2 . A second motivation is that considering whole words at once allows us to consider a more flexible set of features and reason over longer time spans.",
"Whole-word approaches typically involve, at some level, template matching. For example, in template-based speech recognition BIBREF3 , BIBREF4 , word scores are computed from dynamic time warping (DTW) distances between an observed segment and training segments of the hypothesized word. In query-by-example search, putative matches are typically found by measuring the DTW distance between the query and segments of the search database BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . In other words, whole-word approaches often boil down to making decisions about whether two segments are examples of the same word or not.",
"An alternative to DTW that has begun to be explored is the use of acoustic word embeddings (AWEs), or vector representations of spoken word segments. AWEs are representations that can be learned from data, ideally such that the embeddings of two segments corresponding to the same word are close, while embeddings of segments corresponding to different words are far apart. Once word segments are represented via fixed-dimensional embeddings, computing distances is as simple as measuring a cosine or Euclidean distance between two vectors.",
"There has been some, thus far limited, work on acoustic word embeddings, focused on a number of embedding models, training approaches, and tasks BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . In this paper we explore new embedding models based on recurrent neural networks (RNNs), applied to a word discrimination task related to query-by-example search. RNNs are a natural model class for acoustic word embeddings, since they can handle arbitrary-length sequences. We compare several types of RNN-based embeddings and analyze their properties. Compared to prior embeddings tested on the same task, our best models achieve sizable improvements in average precision."
],
[
"We next briefly describe the most closely related prior work.",
"Maas et al. BIBREF9 and Bengio and Heigold BIBREF10 used acoustic word embeddings, based on convolutional neural networks (CNNs), to generate scores for word segments in automatic speech recognition. Maas et al. trained CNNs to predict (continuous-valued) embeddings of the word labels, and used the resulting embeddings to define feature functions in a segmental conditional random field BIBREF17 rescoring system. Bengio and Heigold also developed CNN-based embeddings for lattice rescoring, but with a contrastive loss to separate embeddings of a given word from embeddings of other words.",
"Levin et al. BIBREF11 developed unsupervised embeddings based on representing each word as a vector of DTW distances to a collection of reference word segments. This representation was subsequently used in several applications: a segmental approach for query-by-example search BIBREF12 , lexical clustering BIBREF18 , and unsupervised speech recognition BIBREF19 . Voinea et al. BIBREF15 developed a representation also based on templates, in their case phone templates, designed to be invariant to specific transformations, and showed their robustness on digit classification.",
"Kamper et al. BIBREF13 compared several types of acoustic word embeddings for a word discrimination task related to query-by-example search, finding that embeddings based on convolutional neural networks (CNNs) trained with a contrastive loss outperformed the reference vector approach of Levin et al. BIBREF11 as well as several other CNN and DNN embeddings and DTW using several feature types. There have now been a number of approaches compared on this same task and data BIBREF11 , BIBREF20 , BIBREF21 , BIBREF22 . For a direct comparison with this prior work, in this paper we use the same task and some of the same training losses as Kamper et al., but develop new embedding models based on RNNs.",
"The only prior work of which we are aware using RNNs for acoustic word embeddings is that of Chen et al. BIBREF16 and Chung et al. BIBREF14 . Chen et al. learned a long short-term memory (LSTM) RNN for word classification and used the resulting hidden state vectors as a word embedding in a query-by-example task. The setting was quite specific, however, with a small number of queries and speaker-dependent training. Chung et al. BIBREF14 worked in an unsupervised setting and trained single-layer RNN autoencoders to produce embeddings for a word discrimination task. In this paper we focus on the supervised setting, and compare a variety of RNN-based structures trained with different losses.",
""
],
[
"",
"An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 . The basic embedding model structure we use is shown in Fig. FIGREF1 . The model consists of a deep RNN with some number INLINEFORM3 of stacked layers, whose final hidden state vector is passed as input to a set of INLINEFORM4 of fully connected layers; the output of the final fully connected layer is the embedding INLINEFORM5 .",
"The RNN hidden state at each time frame can be viewed as a representation of the input seen thus far, and its value in the last time frame INLINEFORM0 could itself serve as the final word embedding. The fully connected layers are added to account for the fact that some additional transformation may improve the representation. For example, the hidden state may need to be larger than the desired word embedding dimension, in order to be able to \"remember\" all of the needed intermediate information. Some of that information may not be needed in the final embedding. In addition, the information maintained in the hidden state may not necessarily be discriminative; some additional linear or non-linear transformation may help to learn a discriminative embedding.",
"Within this class of embedding models, we focus on Long Short-Term Memory (LSTM) networks BIBREF23 and Gated Recurrent Unit (GRU) networks BIBREF24 . These are both types of RNNs that include a mechanism for selectively retaining or discarding information at each time frame when updating the hidden state, in order to better utilize long-term context. Both of these RNN variants have been used successfully in speech recognition BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 .",
"In an LSTM RNN, at each time frame both the hidden state INLINEFORM0 and an associated “cell memory\" vector INLINEFORM1 , are updated and passed on to the next time frame. In other words, each forward edge in Figure FIGREF1 can be viewed as carrying both the cell memory and hidden state vectors. The updates are modulated by the values of several gating vectors, which control the degree to which the cell memory and hidden state are updated in light of new information in the current frame. For a single-layer LSTM network, the updates are as follows:",
" INLINEFORM0 ",
"where INLINEFORM0 , and INLINEFORM1 are all vectors of the same dimensionality, INLINEFORM2 , and INLINEFORM3 are learned weight matrices of the appropriate sizes, INLINEFORM4 and INLINEFORM5 are learned bias vectors, INLINEFORM6 is a componentwise logistic activation, and INLINEFORM7 refers to the Hadamard (componentwise) product.",
"Similarly, in a GRU network, at each time step a GRU cell determines what components of old information are retained, overwritten, or modified in light of the next step in the input sequence. The output from a GRU cell is only the hidden state vector. A GRU cell uses a reset gate INLINEFORM0 and an update gate INLINEFORM1 as described below for a single-layer network: INLINEFORM2 ",
"where INLINEFORM0 , and INLINEFORM1 are all the same dimensionality, INLINEFORM2 , and INLINEFORM3 are learned weight matrices of the appropriate size, and INLINEFORM4 , INLINEFORM5 and INLINEFORM6 are learned bias vectors.",
"All of the above equations refer to single-layer networks. In a deep network, with multiple stacked layers, the same update equations are used in each layer, with the state, cell, and gate vectors replaced by layer-specific vectors INLINEFORM0 and so on for layer INLINEFORM1 . For all but the first layer, the input INLINEFORM2 is replaced by the hidden state vector from the previous layer INLINEFORM3 .",
"For the fully connected layers, we use rectified linear unit (ReLU) BIBREF29 activation, except for the final layer which depends on the form of supervision and loss used in training.",
""
],
[
"We train the RNN-based embedding models using a set of pre-segmented spoken words. We use two main training approaches, inspired by prior work but with some differences in the details. As in BIBREF13 , BIBREF10 , our first approach is to use the word labels of the training segments and train the networks to classify the word. In this case, the final layer of INLINEFORM0 is a log-softmax layer. Here we are limited to the subset of the training set that has a sufficient number of segments per word to train a good classifier, and the output dimensionality is equal to the number of words (but see BIBREF13 for a study of varying the dimensionality in such a classifier-based embedding model by introducing a bottleneck layer). This model is trained end-to-end and is optimized with a cross entropy loss. Although labeled data is necessarily limited, the hope is that the learned models will be useful even when applied to spoken examples of words not previously seen in the training data. For words not seen in training, the embeddings should correspond to some measure of similarity of the word to the training words, measured via the posterior probabilities of the previously seen words. In the experiments below, we examine this assumption by analyzing performance on words that appear in the training data compared to those that do not.",
"The second training approach, based on earlier work of Kamper et al. BIBREF13 , is to train \"Siamese\" networks BIBREF30 . In this approach, full supervision is not needed; rather, we use weak supervision in the form of pairs of segments labeled as same or different. The base model remains the same as before—an RNN followed by a set of fully connected layers—but the final layer is no longer a softmax but rather a linear activation layer of arbitrary size. In order to learn the parameters, we simultaneously feed three word segments through three copies of our model (i.e. three networks with shared weights). One input segment is an “anchor\", INLINEFORM0 , the second is another segment with the same word label, INLINEFORM1 , and the third is a segment corresponding to a different word label, INLINEFORM2 . Then, the network is trained using a “cos-hinge\" loss:",
" DISPLAYFORM0 ",
"where INLINEFORM0 is the cosine distance between INLINEFORM1 . Unlike cross entropy training, here we directly aim to optimize relative (cosine) distance between same and different word pairs. For tasks such as query-by-example search, this training loss better respects our end objective, and can use more data since neither fully labeled data nor any minimum number of examples of each word should be needed.",
""
],
[
"",
"Our end goal is to improve performance on downstream tasks requiring accurate word discrimination. In this paper we use an intermediate task that more directly tests whether same- and different-word pairs have the expected relationship. and that allows us to compare to a variety of prior work. Specifically, we use the word discrimination task of Carlin et al. BIBREF20 , which is similar to a query-by-example task where the word segmentations are known. The evaluation consists of determining, for each pair of evaluation segments, whether they are examples of the same or different words, and measuring performance via the average precision (AP). We do this by measuring the cosine similarity between their acoustic word embeddings and declaring them to be the same if the distance is below a threshold. By sweeping the threshold, we obtain a precision-recall curve from which we compute the AP.",
"The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 . The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input to the word embedding models INLINEFORM0 ) are 39-dimensional MFCCs+ INLINEFORM1 + INLINEFORM2 . We use the same train, development, and test partitions as in prior work BIBREF13 , BIBREF11 , and the same acoustic features as in BIBREF13 , for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). As in BIBREF13 , when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments.",
"When training the Siamese networks, the training data consists of all of the same-word pairs in the full training set (approximately 100k pairs). For each such training pair, we randomly sample a third example belonging to a different word type, as required for the INLINEFORM0 loss.",
""
],
[
"Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set.",
"The classifier network is trained with a cross entropy loss and optimized using stochastic gradient descent (SGD) with Nesterov momentum BIBREF33 . The learning rate is initialized at 0.1 and is reduced by a factor of 10 according to the following heuristic: If 99% of the current epoch's average batch loss is greater than the running average of batch losses over the last 3 epochs, this is considered a plateau; if there are 3 consecutive plateau epochs, then the learning rate is reduced. Training stops when reducing the learning rate no longer improves dev set AP. Then, the model from the epoch corresponding to the the best dev set AP is chosen. Several other optimizers—Adagrad BIBREF34 , Adadelta BIBREF35 , and Adam BIBREF36 —were explored in initial experiments on the dev set, but all reported results were obtained using SGD with Nesterov momentum.",
""
],
[
"For experiments with Siamese networks, we initialize (warm-start) the networks with the tuned classification network, removing the final log-softmax layer and replacing it with a linear layer of size equal to the desired embedding dimensionality. We explored embeddings with dimensionalities between 8 and 2048. We use a margin of 0.4 in the cos-hinge loss.",
"In training the Siamese networks, each training mini-batch consists of INLINEFORM0 triplets. INLINEFORM1 triplets are of the form INLINEFORM2 where INLINEFORM3 and INLINEFORM4 are examples of the same class (a pair from the 100k same-word pair set) and INLINEFORM5 is a randomly sampled example from a different class. Then, for each of these INLINEFORM6 triplets INLINEFORM7 , an additional triplet INLINEFORM8 is added to the mini-batch to allow all segments to serve as anchors. This is a slight departure from earlier work BIBREF13 , which we found to improve stability in training and performance on the development set.",
"In preliminary experiments, we compared two methods for choosing the negative examples INLINEFORM0 during training, a uniform sampling approach and a non-uniform one. In the case of uniform sampling, we sample INLINEFORM1 uniformly at random from the full set of training examples with labels different from INLINEFORM2 . This sampling method requires only word-pair supervision. In the case of non-uniform sampling, INLINEFORM3 is sampled in two steps. First, we construct a distribution INLINEFORM4 over word labels INLINEFORM5 and sample a different label from it. Second, we sample an example uniformly from within the subset with the chosen label. The goal of this method is to speed up training by targeting pairs that violate the margin constraint. To construct the multinomial PMF INLINEFORM6 , we maintain an INLINEFORM7 matrix INLINEFORM8 , where INLINEFORM9 is the number of unique word labels in training. Each word label corresponds to an integer INLINEFORM10 INLINEFORM11 [1, INLINEFORM12 ] and therefore a row in INLINEFORM13 . The values in a row of INLINEFORM14 are considered similarity scores, and we can retrieve the desired PMF for each row by normalizing by its sum.",
"At the start of each epoch, we initialize INLINEFORM0 with 0's along the diagonal and 1's elsewhere (which reduces to uniform sampling). For each training pair INLINEFORM1 , we update INLINEFORM2 for both INLINEFORM3 and INLINEFORM4 :",
" INLINEFORM0 ",
"The PMFs INLINEFORM0 are updated after the forward pass of an entire mini-batch. The constant INLINEFORM1 enforces a potentially stronger constraint than is used in the INLINEFORM2 loss, in order to promote diverse sampling. In all experiments, we set INLINEFORM3 . This is a heuristic approach, and it would be interesting to consider various alternatives. Preliminary experiments showed that the non-uniform sampling method outperformed uniform sampling, and in the following we report results with non-uniform sampling.",
"We optimize the Siamese network model using SGD with Nesterov momentum for 15 epochs. The learning rate is initialized to 0.001 and dropped every 3 epochs until no improvement is seen on the dev set. The final model is taken from the epoch with the highest dev set AP. All models were implemented in Torch BIBREF37 and used the rnn library of BIBREF38 .",
""
],
[
" Based on development set results, our final embedding models are LSTM networks with 3 stacked layers and 3 fully connected layers, with output dimensionality of 1024 in the case of Siamese networks. Final test set results are given in Table TABREF7 . We include a comparison with the best prior results on this task from BIBREF13 , as well as the result of using standard DTW on the input MFCCs (reproduced from BIBREF13 ) and the best prior result using DTW, obtained with frame features learned with correlated autoencoders BIBREF21 . Both classifier and Siamese LSTM embedding models outperform all prior results on this task of which we are aware.",
"We next analyze the effects of model design choices, as well as the learned embeddings themselves.",
""
],
[
"Table TABREF10 shows the effect on development set performance of the number of stacked layers INLINEFORM0 , the number of fully connected layers INLINEFORM1 , and LSTM vs. GRU cells, for classifier-based embeddings. The best performance in this experiment is achieved by the LSTM network with INLINEFORM2 . However, performance still seems to be improving with additional layers, suggesting that we may be able to further improve performance by adding even more layers of either type. However, we fixed the model to INLINEFORM3 in order to allow for more experimentation and analysis within a reasonable time.",
"Table TABREF10 reveals an interesting trend. When only one fully connected layer is used, the GRU networks outperform the LSTMs given a sufficient number of stacked layers. On the other hand, once we add more fully connected layers, the LSTMs outperform the GRUs. In the first few lines of Table TABREF10 , we use 2, 3, and 4 layer stacks of LSTMs and GRUs while holding fixed the number of fully-connected layers at INLINEFORM0 . There is clear utility in stacking additional layers; however, even with 4 stacked layers the RNNs still underperform the CNN-based embeddings of BIBREF13 until we begin adding fully connected layers.",
"After exploring a variety of stacked RNNs, we fixed the stack to 3 layers and varied the number of fully connected layers. The value of each additional fully connected layer is clearly greater than that of adding stacked layers. All networks trained with 2 or 3 fully connected layers obtain more than 0.4 AP on the development set, while stacked RNNs with 1 fully connected layer are at around 0.3 AP or less. This may raise the question of whether some simple fully connected model may be all that is needed; however, previous work has shown that this approach is not competitive BIBREF13 , and convolutional or recurrent layers are needed to summarize arbitrary-length segments into a fixed-dimensional representation.",
""
],
[
"For the Siamese networks, we varied the output embedding dimensionality, as shown in Fig. FIGREF11 . This analysis shows that the embeddings learned by the Siamese RNN network are quite robust to reduced dimensionality, outperforming the classifier model for all dimensionalities 32 or higher and outperforming previously reported dev set performance with CNN-based embeddings BIBREF13 for all dimensionalities INLINEFORM0 .",
""
],
[
"We might expect the learned embeddings to be more accurate for words that are seen in training than for ones that are not. Fig. FIGREF11 measures this effect by showing performance as a function of the number of occurrences of the dev words in the training set. Indeed, both model types are much more successful for in-vocabulary words, and their performance improves the higher the training frequency of the words. However, performance increases more quickly for the Siamese network than for the classifier as training frequency increases. This may be due to the fact that, if a word type occurs at least INLINEFORM0 times in the classifier training set, then it occurs at least INLINEFORM1 times in the Siamese paired training data.",
""
],
[
"In order to gain a better qualitative understanding of the differences between clasiffier and Siamese-based embeddings, and of the learned embedding space more generally, we plot a two-dimensional visualization of some of our learned embeddings via t-SNE BIBREF40 in Fig. FIGREF12 . For both classifier and Siamese embeddings, there is a marked difference in the quality of clusters formed by embeddings of words that were previously seen vs. previously unseen in training. However, the Siamese network embeddings appear to have better relative distances between word clusters with similar and dissimilar pronunciations. For example, the word programs appears equidistant from problems and problem in the classifier-based embedding space, but in the Siamese embedding space problems falls between problem and programs. Similarly, the cluster for democracy shifts with respect to actually and especially to better respect differences in pronunciation. More study of learned embeddings, using more data and word types, is needed to confirm such patterns in general. Improvements in unseen word embeddings from the classifier embedding space to the Siamese embedding space (such as for democracy, morning, and basketball) are a likely result of optimizing the model for relative distances between words.",
""
],
[
"",
"Our main finding is that RNN-based acoustic word embeddings outperform prior approaches, as measured via a word discrimination task related to query-by-example search. Our best results are obtained with deep LSTM RNNs with a combination of several stacked layers and several fully connected layers, optimized with a contrastive Siamese loss. Siamese networks have the benefit that, for any given training data set, they are effectively trained on a much larger set, in the sense that they measure a loss and gradient for every possible pair of data points. Our experiments suggest that the models could still be improved with additional layers. In addition, we have found that, for the purposes of acoustic word embeddings, fully connected layers are very important and have a more significant effect per layer than stacked layers, particularly when trained with the cross entropy loss function.",
"These experiments represent an initial exploration of sequential neural models for acoustic word embeddings. There are a number of directions for further work. For example, while our analyses suggest that Siamese networks are better than classifier-based models at embedding previously unseen words, our best embeddings are still much poorer for unseen words. Improvements in this direction may come from larger training sets, or may require new models that better model the shared structure between words. Other directions for future work include additional forms of supervision and training, as well as application to downstream tasks."
]
]
}
|
{
"question": [
"How do they represent input features of their model to train embeddings?",
"Which dimensionality do they use for their embeddings?",
"Which dataset do they use?",
"By how much do they outpeform previous results on the word discrimination task?"
],
"question_id": [
"d40662236eed26f17dd2a3a9052a4cee1482d7d6",
"1d791713d1aa77358f11501f05c108045f53c8aa",
"6b6360fab2edc836901195c0aba973eae4891975",
"b6b5f92a1d9fa623b25c70c1ac67d59d84d9eec8"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"a vector of frame-level acoustic features"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 . The basic embedding model structure we use is shown in Fig. FIGREF1 . The model consists of a deep RNN with some number INLINEFORM3 of stacked layers, whose final hidden state vector is passed as input to a set of INLINEFORM4 of fully connected layers; the output of the final fully connected layer is the embedding INLINEFORM5 ."
],
"highlighted_evidence": [
"An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a vector of frame-level acoustic features, and outputs a fixed-dimensional vector representing the segment, INLINEFORM2 ."
]
}
],
"annotation_id": [
"1fd4f3fbe7b6046c29581d726d5cfe3e080fd7c8"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"1061"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our classifier-based embeddings use LSTM or GRU networks with 2–4 stacked layers and 1–3 fully connected layers. The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061. The recurrent hidden state dimensionality is fixed at 512 and dropout BIBREF32 between stacked recurrent layers is used with probability INLINEFORM0 . The fully connected hidden layer dimensionality is fixed at 1024. Rectified linear unit (ReLU) non-linearities and dropout with INLINEFORM1 are used between fully-connected layers. However, between the final recurrent hidden state output and the first fully-connected layer no non-linearity or dropout is applied. These settings were determined through experiments on the development set."
],
"highlighted_evidence": [
"The final embedding dimensionality is equal to the number of unique word labels in the training set, which is 1061."
]
}
],
"annotation_id": [
"1296db0535d800668b7dfc49d903edf11643d543"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Switchboard conversational English corpus"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 . The word segments range from 50 to 200 frames in length. The acoustic features in each frame (the input to the word embedding models INLINEFORM0 ) are 39-dimensional MFCCs+ INLINEFORM1 + INLINEFORM2 . We use the same train, development, and test partitions as in prior work BIBREF13 , BIBREF11 , and the same acoustic features as in BIBREF13 , for as direct a comparison as possible. The train set contains approximately 10k example segments, while dev and test each contain approximately 11k segments (corresponding to about 60M pairs for computing the dev/test AP). As in BIBREF13 , when training the classification-based embeddings, we use a subset of the training set containing all word types with a minimum of 3 occurrences, reducing the training set size to approximately 9k segments."
],
"highlighted_evidence": [
"The data used for this task is drawn from the Switchboard conversational English corpus BIBREF31 ."
]
}
],
"annotation_id": [
"2aa70ad856356c985fd3ab88b850c08da935d830"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Their best average precision tops previous best result by 0.202",
"evidence": [
"FLOAT SELECTED: Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations."
]
}
],
"annotation_id": [
"e29d3437584259c203f003372b6df706a73753c3"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
]
}
|
{
"caption": [
"Fig. 1: LSTM-based acoustic word embedding model. For GRUbased models, the structure is the same, but the LSTM cells are replaced with GRU cells, and there is no cell activation vector; the recurrent connections only carry the hidden state vector hlt.",
"Fig. 2: Effect of embedding dimensionality (left) and occurrences in training set (right).",
"Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations.",
"Table 2: Average precision on the dev set, using classifier-based embeddings. S = # stacked layers, F = # fully connected layers.",
"Fig. 3: t-SNE visualization of word embeddings from the dev set produced by the classifier (top) vs. Siamese (bottom) models. Word labels seen at training time are denoted by triangles and word labels unseen at training time are denoted by circles."
],
"file": [
"2-Figure1-1.png",
"5-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Figure3-1.png"
]
}
|
2003.05522
|
Semantic Holism and Word Representations in Artificial Neural Networks
|
Artificial neural networks are a state-of-the-art solution for many problems in natural language processing. What can we learn about language and meaning from the way artificial neural networks represent it? Word representations obtained from the Skip-gram variant of the word2vec model exhibit interesting semantic properties. This is usually explained by referring to the general distributional hypothesis, which states that the meaning of the word is given by the contexts where it occurs. We propose a more specific approach based on Frege's holistic and functional approach to meaning. Taking Tugendhat's formal reinterpretation of Frege's work as a starting point, we demonstrate that it is analogical to the process of training the Skip-gram model and offers a possible explanation of its semantic properties.
|
{
"section_name": [
"INTRODUCTION",
"INTRODUCTION ::: Related work",
"SEMANTIC HOLISM AND ATOMISM",
"SEMANTIC HOLISM AND ATOMISM ::: Atomism",
"SEMANTIC HOLISM AND ATOMISM ::: Holism",
"WORD REPRESENTATIONS IN AI",
"WORD REPRESENTATIONS IN AI ::: Semantic properties of the Skip-Gram model",
"RELEVANT THEORIES OF MEANING",
"RELEVANT THEORIES OF MEANING ::: The distributional hypothesis",
"RELEVANT THEORIES OF MEANING ::: The use theory of meaning",
"RELEVANT THEORIES OF MEANING ::: Structuralism",
"SKIP-GRAM AND TRUTH-VALUE POTENTIAL",
"SKIP-GRAM AND TRUTH-VALUE POTENTIAL ::: The truth-value potential",
"SKIP-GRAM AND TRUTH-VALUE POTENTIAL ::: Word2vec models and semantic holism",
"CONCLUSION AND FUTURE WORK"
],
"paragraphs": [
[
"“Meaning is, therefore, something that words have in sentences; and it's something that sentences have in a language.” BIBREF0 On the other hand, meaning could also be something that words have on their own, with sentences being compositions and language a collection of words. This is the question of semantic holism versus atomism, which was important in the philosophy of language in the second half of the 20th century and has not been satisfyingly answered yet.",
"Artificial neural networks are the state-of-the-art solution for many problems in natural language processing (and machine learning in general). They produce word representation with interesting properties, but the way they work is little understood from the perspective of linguistics or the philosophy of language.",
"We believe that by finding parallels between concepts in AI and the philosophy of language, we can better understand both areas.",
"In this paper, we present an analogy between meaning defined as truth-value potential (a reformulation of Fregean holistic and functional approach) and a variant of language representation model, therefore pointing out a possibility that its “striking syntactic and semantic properties” BIBREF1 are formed due to adhering to holistic principles."
],
[
"We have found only one work concerning the philosophical aspects of neural language models BIBREF2. It is, however, concentrating on Self-Organizing Maps and Quine's version of semantic holism.",
"There are papers showing that Skip-gram with negative sampling is implicitly a factorization of a word-context matrix (e.g. BIBREF3, although this result was later contested by various authors, such as BIBREF4 and BIBREF5), or deriving the equations in an alternative way BIBREF6 (discussed more in Section SECREF3). This may tell us something about the model, but it does not answer the principal question: why should the matrix factorized in a certain way contain semantic information?"
],
[
"Semantic holism (or meaning holism) is “the thesis that what a linguistic expression means depends on its relations to many or all other expressions within the same totality. [...] The totality in question may be the language to which the expressions belong, or a theory formulation in that language.” BIBREF7 The opposing view is called semantic atomism, and it claims that there are expressions (typically words), whose meaning does not depend on the meaning of other expressions. The meaning of these expressions is given by something outside language (e.g. their relation to physical or mental objects).",
"In the following sections, we will specify the implications of both alternatives for semantics. The question also plays a role in cognitive science (content identity and similarity), epistemology (commensurability of theories) and seems to be strongly connected with the analytic/synthetic distinction BIBREF0. There are other positions in between these two, such as semantic molecularism or the belief that neither relations external nor internal are primary in forming meaning. However, to keep this text simple, we will only concentrate on extreme positions. We will also only talk about words, although the same argument can be used with smaller meaningful language units (e.g. parts of a compound word).",
"Our goal is not to assess whether the truth lies with holism, atomism or neither of them. We will only show that holism is a useful perspective where understanding neural language models is concerned.",
"Before we get into details of the two perspectives, let us point out two critical aspects of their difference: holism proclaims interdependence of meanings of words, contrary to their independence in atomism. And holism favours decomposition over composition."
],
[
"“It is a widely held view that much of the history of the philosophy of language consists of a failed attempt to make semantic atomism work.” BIBREF0",
"Atomism played an important role in analytic philosophy, starting with Bertrand Russell's logical atomism and continuing with logical positivism, as exemplified in this quote by Carnap BIBREF8:",
"A language consists of a vocabulary and a syntax, i.e. a set of words which have meanings and rules of sentence formation. These rules indicate how sentences may be formed out of the various sorts of words.",
"For logical positivists, words have meaning, because they refer to objects (be it physical, sensual, logical, mathematical or other). The rules of composition determine the meaning of sentences (and rule out senseless sequences of words).",
"Under this (or similar) view, the fact that words refer to the outside world is presupposed. Their references are independent of each other (that “dog” refers to dog is independent of that “horse” refers to horse). There is strong emphasis on compositionality, that reached its peak in Chomskian linguistics and is still relevant today.",
"Crucially, this means that a word can have meaning on its own (e.g. by referring to something). The meaning of larger units, such as sentences, is derived by the rules of composition from the meaning of words."
],
[
"Semantic holism accents the interdependence of meaning. The whole (language, theory, ...) is the primary vehicle of meaning. The meaning of smaller units is derived by decomposition.",
"This view is motivated by the same word having a different meaning in a different context. Gottlob Frege has shown BIBREF9 that even such seemingly unambiguous words as numbers play distinct roles in different situations: “5 is a prime number” and “there are 5 cows on the meadow” are different at least in that the first “5” signifies a complete (abstract) object, while the second one needs to be supplemented with information that it is cattle of which there are 5 specimens, otherwise the expression would not be grammatical.",
"Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10 We will later use its modern reformulation to show an analogy with certain neural language models and therefore their holistic character.",
"Another group of arguments for holism consist of variations on the theme of impossibility of knowing or using a word without being able to use other words. For example, it could be argued that a person could not correctly use the word “mammal”, without also knowing (at least some of) “bird”, “animal” and kinds of animals. Therefore the meaning of words cannot be formed in isolation.",
"Something that is harder to explain under holism than under atomism is the fact that words refer to objects. If the meaning of words is given by other words, how is it connected to the world around us? However, not all words refer to something. And even if subscribing to holism makes explaining reference harder, it may be because it is a hard problem to explain.",
"Another thing that is simpler under atomism is compositionality. While in atomism it plays a central role as one of the presupposed properties of language, holism may not need it. But it does not claim that words do not have meaning at all, only that it is derived (by some sort of decomposition) from the meaning of the whole."
],
[
"Although all artificial neural networks that work with language must have some way of representing it, the most interesting representations come from neural language models. Language modelling is a task of predicting a missing word from a sequence or generating text. There is also a similar class of models that are designed specifically to produce representations of language units, which we will call neural language representation models.",
"The representations (also called embeddings) are high dimensional vectors of real numbers. They are either learned together with the rest of the network for the particular task or pretrained by a general language representation model (typically on a larger dataset not specific for the task).",
"Some neural language (representation) models produce representation with semantic properties, although the task of language modeling itself is not (at least at the first sight) directly connected with semantics and no explicit semantic annotation is given to the neural network.",
"These semantic properties became popular with the invention of the word2vec software and the Skip-gram model, whose author said about it BIBREF1:",
"The model itself has no knowledge of syntax or morphology or semantics. Remarkably, training such a purely lexical model to maximize likelihood will induce word representations with striking syntactic and semantic properties.",
"However, they did not present any explanation of the phenomenon.",
"Goldberg and Levy BIBREF6 present a detailed derivation of the central equation of the Skip-gram model. In the last section they say:",
"Why does this produce good word representations?",
"Good question. We don't really know.",
"The distributional hypothesis states that words in similar contexts have similar meanings. The objective [of the Skip-gram model] clearly tries to increase the [dot product of the context and the word representations] for good word-context pairs, and decrease it for bad ones. Intuitively, this means that words that share many contexts will be similar to each other (note also that contexts sharing many words will also be similar to each other). This is, however, very hand-wavy. Can we make this intuition more precise? We'd really like to see something more formal.",
"We believe that the implicit holistic component of this “hand-wavy” approach is central to the quality of Skip-gram representations and we can make the intuition more precise by analogy with the definition of the truth-value potential."
],
[
"The Skip-gram model was introduced by Tomáš Mikolov et al. BIBREF11 as a method to efficiently train word embeddings. It exceeded state-of-the-art in various semantic tasks. The embeddings have interesting semantic properties, most notably the vector arithmetic illustrated by Figure FIGREF4 and the following equation BIBREF1:",
"meaning that starting with the word “king”, if we subtract the vector for the word “man” and add the vector for the word “woman”, the nearest vector in the embedding space will be the one that corresponds to the word “queen”. This means that queen is to woman as king is to man.",
"Hollis et al. BIBREF12 show that it is possible to infer various psycholinguistic and semantic properties of words from the Skip-gram embeddings. Mikolov et al. BIBREF13 also trained the Skip-gram model with phrases, resulting in even simpler and more elegant equations, such as",
"Mikolov et al. BIBREF11 proposed another shallow neural language model, Continuous Bag of Words (CBOW). The main difference between CBOW and Skip-gram (see Figure FIGREF6) is that while Skip-gram predicts context words from a given word, CBOW predicts a word from a given context."
],
[
"In this section, we discuss theories of meaning that are relevant to word representations in artificial neural networks. Notice that even though they, strictly speaking, do not require meaning holism, they all lean towards it quite strongly."
],
[
"Holism is generally a better alternative in cases where there is nothing beside language itself to anchor meaning to. This is the case of neural language (representation) models. If they represent meaning at all, it must be derived from the training corpus. This may be the reason behind the popularity of the distributional hypothesis in neural language model literature. The famous saying by Firth BIBREF14, “You shall know a word by the company it keeps!”, is quoted in the majority of papers concerned with vector space models of language.",
"The general distributional hypothesis states that the meaning of a word is given by the contexts in which it occurs. It is, however, worth noticing that in Firth's theory, collocation is just one among multiple levels of meaning and his text does not support the idea of meaning based on context alone.",
"A more suitable formulation of the distributional hypothesis (referenced in connection to Skip-gram in BIBREF15) is found in Distributional structure BIBREF16, where it is suggested that distribution may be used for comparing meanings and that “difference of meaning correlates with difference of distribution”.",
"Although this certainly describes a basic principle of neural language models, it is still rather vague."
],
[
"The use theory of meaning can be summed up as “the meaning of a word is its use in the language” BIBREF17. It is associated with late Wittgenstein's concept of language game. In Philosophical Investigations BIBREF17, he writes:",
"To say “This combination of words makes no sense” excludes it from the sphere of language and thereby bounds the domain of language. [...] When a sentence is called senseless, it is not as it were its sense that is senseless. But a combination of words is being excluded from the language, withdrawn from circulation.",
"This “bounding of the domain of language” is precisely what a language model does; therefore, the use theory may be one way to connect language modelling and semantics.",
"That “knowledge of language emerges from language use” is also one of main hypotheses of cognitive linguistics BIBREF18."
],
[
"In structuralism BIBREF19, the meaning of a word is given by its relation to the other words of the language:",
"The elements of a structure have neither extrinsic designation, nor intrinsic signification. Then what is left? [...] [N]othing other than a sense [...]: a sense which is necessarily and uniquely “positional.” BIBREF20",
"This holds for word representations in artificial neural networks as well. The vectors representing the words do not have any other meaning than their position among the rest of the vectors and a single vector does not have any significance outside the model. This is also demonstrated by the vectors being different every time the model is trained because of random initialization."
],
[
"In this section, we introduce the truth-value potential and show that Skip-gram corresponds to it better than CBOW."
],
[
"Tugendhat's compact reformulation of Frege's sentence holism, the definition of meaning as truth-value potential is BIBREF21:",
"[T]wo expressions $\\phi $ and $\\psi $ have the same truth-value potential if and only if, whenever each is completed by the same expression to form a sentence, the two sentences have the same truth-value.",
"We can also express this definition in the following form:",
"where $M$ is the truth-value potential (meaning), $T$ is the truth-value of the sentence and $x(\\omega )$ is the result of completing the expression $\\omega $ by the expression $x$ to form a sentence.",
"One important aspect of this definition is that, following Frege BIBREF10, it is based on an assumption that the sentence (or rather the corresponding judgement) is the basic unit of meaning."
],
[
"The definition of meaning as truth-value potential is analogous to the process of training a model for word representations. One difference is that when we are training a model, we do not have the whole of language at our disposal. Even after approximating the language with a finite corpus, it still is not practical to compare all the contexts for a given word at the same time, therefore the universal quantifier has to be replaced by an iterative process of examining the contexts one by one (or actually batch by batch, which is a step back towards the totality that is being estimated). And we have no means to assess whether the sentences from the corpus are true or false. We can either assume that they are mostly true, or try to replace the concept of truth with something else (maybe language use). Even the first option seems to be enough—imagine a corpus full of false sentences about cats, e.g. “Cats can fly.”, “Cats are cetaceans.” etc. We cannot expect the representation of the word “cats” in a model trained on this corpus to be any good, therefore the requirement for the corpus to consist mostly of true sentences is not excessive.",
"The simplest model that corresponds to this analogy is the Skip-gram model. It does just what is described in the definition – it fixes a word and goes through all the possible contexts. It compares the words based on the context. The context words are predicted and their representations are fixed (in a single training step), while the representation of a single word is learned. By learning the representation of a word from the representation of the context, Skip-gram complies to the principles of semantic holism. The analogy between the definition of truth-value potential and the process of training the Skip-gram model is one possible explanation for its semantic properties and its performance in semantic tasks.",
"The complementary CBOW architecture (see Figure FIGREF6) performs much worse in the evaluation of the semantic tasks BIBREF11. In CBOW, a missing word is predicted from its context. Therefore, in a single learning step, the representation of the missing word is fixed. What changes (and is learned) is the representation of the context words. By learning the representation of the context from the representation of the word, CBOW is implicitly conforming to semantic atomism: words are the basic units of meaning and the meaning of the broader context is derived from the atomic meaning of words. This may be the reason why CBOW does not exhibit the same semantic properties as Skip-gram and it performs worse in semantic tasks."
],
[
"The distributional hypothesis as an explanation for the semantic properties of neural language models should be expanded into a more detailed account. We show one possible way to do that via a Fregean approach to meaning.",
"Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts. As we demonstrated on the opposition between Skip-gram and CBOW models, the distinction between semantic holism and atomism may play an essential role in semantic properties of neural language representation models.",
"We have demonstrated the connection between the Skip-gram model and the definition of meaning as truth-value potential. Although this is an isolated observation of an analogy between a specific model and a specific theory about meaning, it is a crucial step towards finding a theory of meaning that would correspond to the current results of NLP research, increasing our understanding of NLP and ultimately the language itself.",
"The direction of research from successful language technologies to properties of language itself offers many opportunities for inquiry, with very few being explored so far.",
"Many state-of-the-art models for natural language processing use smaller units than words for their input and output. This analysis could be extended to take this into account.",
"It might also be interesting to think about the philosophy of science in technical fields dominated by machine learning, but that is far beyond the scope of this paper.",
"This work has been supported by the grant 18-02196S of the Czech Science Foundation. This research was partially supported by SVV project number 260 575."
]
]
}
|
{
"question": [
"How does Frege's holistic and functional approach to meaning relates to general distributional hypothesis?",
"What does Frege's holistic and functional approach to meaning states?"
],
"question_id": [
"86a93a2d1c19cd0cd21ad1608f2a336240725700",
"6090d3187c41829613abe785f0f3665d9ecd90d9"
],
"nlp_background": [
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no"
],
"search_query": [
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"interpretation of Frege's work are examples of holistic approaches to meaning"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts. As we demonstrated on the opposition between Skip-gram and CBOW models, the distinction between semantic holism and atomism may play an essential role in semantic properties of neural language representations models."
],
"highlighted_evidence": [
"Both the distributional hypothesis itself and Tugendhat's interpretation of Frege's work are examples of holistic approaches to meaning, where the meaning of the whole determines the meaning of parts."
]
}
],
"annotation_id": [
"12cbe7b5338668d7496f2ee6247b5343f0c35ae3"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Only in the context of a sentence does a word have a meaning."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10 We will later use its modern reformulation to show an analogy with certain neural language models and therefore their holistic character."
],
"highlighted_evidence": [
"Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10"
]
}
],
"annotation_id": [
"68e12003b1ff69b600deee00c2035adeba083bc3"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1. Examples of embeddings semantic relations according to [18].",
"Figure 2. CBOW and Skip-gram language models according to [16]."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png"
]
}
|
1601.06068
|
Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing
|
One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries -- there are many ways to ask a question, all with the same answer. In this paper we propose to bridge this gap by generating paraphrases of the input question with the goal that at least one of them will be correctly mapped to a knowledge-base query. We introduce a novel grammar model for paraphrase generation that does not require any sentence-aligned paraphrase corpus. Our key idea is to leverage the flexibility and scalability of latent-variable probabilistic context-free grammars to sample paraphrases. We do an extrinsic evaluation of our paraphrases by plugging them into a semantic parser for Freebase. Our evaluation experiments on the WebQuestions benchmark dataset show that the performance of the semantic parser significantly improves over strong baselines.
|
{
"section_name": [
"Introduction",
"Paraphrase Generation Using Grammars",
"Paraphrases Generation Algorithm",
"Bi-Layered L-PCFGs",
"Paraphrase Classification",
"Semantic Parsing using Paraphrasing",
"Ungrounded Graphs from Paraphrases",
"Grounded Graphs from Ungrounded Graphs",
"Learning",
"Experimental Setup",
"Evaluation Data and Metric",
"Baselines",
"Implementation Details",
"Results and Discussion",
"Conclusion",
"Acknowledgements"
],
"paragraphs": [
[
"Semantic parsers map sentences onto logical forms that can be used to query databases BIBREF0 , BIBREF1 , instruct robots BIBREF2 , extract information BIBREF3 , or describe visual scenes BIBREF4 . In this paper we consider the problem of semantically parsing questions into Freebase logical forms for the goal of question answering. Current systems accomplish this by learning task-specific grammars BIBREF5 , strongly-typed CCG grammars BIBREF6 , BIBREF7 , or neural networks without requiring any grammar BIBREF8 . These methods are sensitive to the words used in a question and their word order, making them vulnerable to unseen words and phrases. Furthermore, mismatch between natural language and Freebase makes the problem even harder. For example, Freebase expresses the fact that “Czech is the official language of Czech Republic” (encoded as a graph), whereas to answer a question like “What do people in Czech Republic speak?” one should infer people in Czech Republic refers to Czech Republic and What refers to the language and speak refers to the predicate official language.",
"We address the above problems by using paraphrases of the original question. Paraphrasing has shown to be promising for semantic parsing BIBREF9 , BIBREF10 , BIBREF11 . We propose a novel framework for paraphrasing using latent-variable PCFGs (L-PCFGs). Earlier approaches to paraphrasing used phrase-based machine translation for text-based QA BIBREF12 , BIBREF13 , or hand annotated grammars for KB-based QA BIBREF10 . We find that phrase-based statistical machine translation (MT) approaches mainly produce lexical paraphrases without much syntactic diversity, whereas our grammar-based approach is capable of producing both lexically and syntactically diverse paraphrases. Unlike MT based approaches, our system does not require aligned parallel paraphrase corpora. In addition we do not require hand annotated grammars for paraphrase generation but instead learn the grammar directly from a large scale question corpus.",
"The main contributions of this paper are two fold. First, we present an algorithm (§ \"Paraphrase Generation Using Grammars\" ) to generate paraphrases using latent-variable PCFGs. We use the spectral method of narayan-15 to estimate L-PCFGs on a large scale question treebank. Our grammar model leads to a robust and an efficient system for paraphrase generation in open-domain question answering. While CFGs have been explored for paraphrasing using bilingual parallel corpus BIBREF14 , ours is the first implementation of CFG that uses only monolingual data. Second, we show that generated paraphrases can be used to improve semantic parsing of questions into Freebase logical forms (§ \"Semantic Parsing using Paraphrasing\" ). We build on a strong baseline of reddylargescale2014 and show that our grammar model competes with MT baseline even without using any parallel paraphrase resources."
],
[
"Our paraphrase generation algorithm is based on a model in the form of an L-PCFG. L-PCFGs are PCFGs where the nonterminals are refined with latent states that provide some contextual information about each node in a given derivation. L-PCFGs have been used in various ways, most commonly for syntactic parsing BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 .",
"In our estimation of L-PCFGs, we use the spectral method of narayan-15, instead of using EM, as has been used in the past by matsuzaki-2005 and petrov-2006. The spectral method we use enables the choice of a set of feature functions that indicate the latent states, which proves to be useful in our case. It also leads to sparse grammar estimates and compact models.",
"The spectral method works by identifying feature functions for “inside” and “outside” trees, and then clusters them into latent states. Then it follows with a maximum likelihood estimation step, that assumes the latent states are represented by clusters obtained through the feature function clustering. For more details about these constructions, we refer the reader to cohen-13 and narayan-15.",
"The rest of this section describes our paraphrase generation algorithm."
],
[
"We define our paraphrase generation task as a sampling problem from an L-PCFG $G_{\\mathrm {syn}}$ , which is estimated from a large corpus of parsed questions. Once this grammar is estimated, our algorithm follows a pipeline with two major steps.",
"We first build a word lattice $W_q$ for the input question $q$ . We use the lattice to constrain our paraphrases to a specific choice of words and phrases that can be used. Once this lattice is created, a grammar $G_{\\mathrm {syn}}^{\\prime }$ is then extracted from $G_{\\mathrm {syn}}$ . This grammar is constrained to the lattice.",
"We experiment with three ways of constructing word lattices: naïve word lattices representing the words from the input question only, word lattices constructed with the Paraphrase Database BIBREF14 and word lattices constructed with a bi-layered L-PCFG, described in § \"Bi-Layered L-PCFGs\" . For example, Figure 1 shows an example word lattice for the question What language do people in Czech Republic speak? using the lexical and phrasal rules from the PPDB.",
"Once $G_{\\mathrm {syn}}^{\\prime }$ is generated, we sample paraphrases of the input question $q$ . These paraphrases are further filtered with a classifier to improve the precision of the generated paraphrases.",
"We train the L-PCFG $G_{\\mathrm {syn}}$ on the Paralex corpus BIBREF9 . Paralex is a large monolingual parallel corpus, containing 18 million pairs of question paraphrases with 2.4M distinct questions in the corpus. It is suitable for our task of generating paraphrases since its large scale makes our model robust for open-domain questions. We construct a treebank by parsing 2.4M distinct questions from Paralex using the BLLIP parser BIBREF25 .",
"Given the treebank, we use the spectral algorithm of narayan-15 to learn an L-PCFG for constituency parsing to learn $G_{\\mathrm {syn}}$ . We follow narayan-15 and use the same feature functions for the inside and outside trees as they use, capturing contextual syntactic information about nonterminals. We refer the reader to narayan-15 for more detailed description of these features. In our experiments, we set the number of latent states to 24.",
"Once we estimate $G_{\\mathrm {syn}}$ from the Paralex corpus, we restrict it for each question to a grammar $G_{\\mathrm {syn}}^{\\prime }$ by keeping only the rules that could lead to a derivation over the lattice. This step is similar to lexical pruning in standard grammar-based generation process to avoid an intermediate derivation which can never lead to a successful derivation BIBREF26 , BIBREF27 .",
"Sampling a question from the grammar $G_{\\mathrm {syn}}^{\\prime }$ is done by recursively sampling nodes in the derivation tree, together with their latent states, in a top-down breadth-first fashion. Sampling from the pruned grammar $G_{\\mathrm {syn}}^{\\prime }$ raises an issue of oversampling words that are more frequent in the training data. To lessen this problem, we follow a controlled sampling approach where sampling is guided by the word lattice $W_q$ . Once a word $w$ from a path $e$ in $W_q$ is sampled, all other parallel or conflicting paths to $e$ are removed from $W_q$ . For example, generating for the word lattice in Figure 1 , when we sample the word citizens, we drop out the paths “human beings”, “people's”, “the population”, “people” and “members of the public” from $W_q$ and accordingly update the grammar. The controlled sampling ensures that each sampled question uses words from a single start-to-end path in $W_q$ . For example, we could sample a question what is Czech Republic 's language? by sampling words from the path (what, language, do, people 's, in, Czech, Republic, is speaking, ?) in Figure 1 . We repeat this sampling process to generate multiple potential paraphrases.",
"The resulting generation algorithm has multiple advantages over existing grammar generation methods. First, the sampling from an L-PCFG grammar lessens the lexical ambiguity problem evident in lexicalized grammars such as tree adjoining grammars BIBREF27 and combinatory categorial grammars BIBREF28 . Our grammar is not lexicalized, only unary context-free rules are lexicalized. Second, the top-down sampling restricts the combinatorics inherent to bottom-up search BIBREF29 . Third, we do not restrict the generation by the order information in the input. The lack of order information in the input often raises the high combinatorics in lexicalist approaches BIBREF30 . In our case, however, we use sampling to reduce this problem, and it allows us to produce syntactically diverse questions. And fourth, we impose no constraints on the grammar thereby making it easier to maintain bi-directional (recursive) grammars that can be used both for parsing and for generation BIBREF31 ."
],
[
"As mentioned earlier, one of our lattice types is based on bi-layered PCFGs introduced here.",
"In their traditional use, the latent states in L-PCFGs aim to capture syntactic information. We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions.",
"To create the bi-layered L-PCFG, we again use the spectral algorithm of narayan-15 to estimate a grammar $G_{\\mathrm {par}}$ from the Paralex corpus. We use the word alignment of paraphrase question pairs in Paralex to map inside and outside trees of each nonterminals in the treebank to bag of word features. The number of latent states we use is 1,000.",
"Once the two feature functions (syntactic in $G_{\\mathrm {syn}}$ and semantic in $G_{\\mathrm {par}}$ ) are created, each nonterminal in the training treebank is assigned two latent states (cluster identifiers). Figure 2 shows an example annotation of trees for three paraphrase questions from the Paralex corpus. We compute the parameters of the bi-layered L-PCFG $G_{\\mathrm {layered}}$ with a simple frequency count maximum likelihood estimate over this annotated treebank. As such, $G_{\\mathrm {layered}}$ is a combination of $G_{\\mathrm {syn}}$ and $G_{\\mathrm {par}}$ , resulting in 24,000 latent states (24 syntactic x 1000 semantic).",
"Consider an example where we want to generate paraphrases for the question what day is nochebuena. Parsing it with $G_{\\mathrm {layered}}$ will lead to the leftmost hybrid structure as shown in Figure 2 . The assignment of the first latent states for each nonterminals ensures that we retrieve the correct syntactic representation of the sentence. Here, however, we are more interested in the second latent states assigned to each nonterminals which capture the paraphrase information of the sentence at various levels. For example, we have a unary lexical rule (NN-*-142 day) indicating that we observe day with NN of the paraphrase type 142. We could use this information to extract unary rules of the form (NN-*-142 $w$ ) in the treebank that will generate words $w$ which are paraphrases to day. Similarly, any node WHNP-*-291 in the treebank will generate paraphrases for what day, SBARQ-*-403, for what day is nochebuena. This way we will be able to generate paraphrases when is nochebuena and when is nochebuena celebrated as they both have SBARQ-*-403 as their roots.",
"To generate a word lattice $W_q$ for a given question $q$ , we parse $q$ with the bi-layered grammar $G_{\\mathrm {layered}}$ . For each rule of the form $X$ - $m_1$ - $m_2 \\rightarrow w$ in the bi-layered tree with $X \\in {\\cal P}$ , $m_1 \\in \\lbrace 1, \\ldots , 24 \\rbrace $ , $m_2 \\in \\lbrace 1, \\ldots , 1000 \\rbrace $ and $q$0 a word in $q$1 , we extract rules of the form $q$2 - $q$3 - $q$4 from $q$5 such that $q$6 . For each such $q$7 , we add a path $q$8 parallel to $q$9 in the word lattice."
],
[
"Our sampling algorithm overgenerates paraphrases which are incorrect. To improve its precision, we build a binary classifier to filter the generated paraphrases. We randomly select 100 distinct questions from the Paralex corpus and generate paraphrases using our generation algorithm with various lattice settings. We randomly select 1,000 pairs of input-sampled sentences and manually annotate them as “correct” or “incorrect” paraphrases. We train our classifier on this manually created training data. We follow madnani2012, who used MT metrics for paraphrase identification, and experiment with 8 MT metrics as features for our binary classifier. In addition, we experiment with a binary feature which checks if the sampled paraphrase preserves named entities from the input sentence. We use WEKA BIBREF32 to replicate the classifier of madnani2012 with our new feature. We tune the feature set for our classifier on the development data."
],
[
"In this section we describe how the paraphrase algorithm is used for converting natural language to Freebase queries. Following reddylargescale2014, we formalize the semantic parsing problem as a graph matching problem, i.e., finding the Freebase subgraph (grounded graph) that is isomorphic to the input question semantic structure (ungrounded graph).",
"This formulation has a major limitation that can be alleviated by using our paraphrase generation algorithm. Consider the question What language do people in Czech Republic speak?. The ungrounded graph corresponding to this question is shown in Figure 3 . The Freebase grounded graph which results in correct answer is shown in Figure 3 . Note that these two graphs are non-isomorphic making it impossible to derive the correct grounding from the ungrounded graph. In fact, at least 15% of the examples in our development set fail to satisfy isomorphic assumption. In order to address this problem, we use paraphrases of the input question to generate additional ungrounded graphs, with the aim that one of those paraphrases will have a structure isomorphic to the correct grounding. Figure 3 and Figure 3 are two such paraphrases which can be converted to Figure 3 as described in sec:groundedGraphs.",
"For a given input question, first we build ungrounded graphs from its paraphrases. We convert these graphs to Freebase graphs. To learn this mapping, we rely on manually assembled question-answer pairs. For each training question, we first find the set of oracle grounded graphs—Freebase subgraphs which when executed yield the correct answer—derivable from the question's ungrounded graphs. These oracle graphs are then used to train a structured perceptron model. These steps are discussed in detail below."
],
[
"We use GraphParser BIBREF7 to convert paraphrases to ungrounded graphs. This conversion involves three steps: 1) parsing the paraphrase using a CCG parser to extract syntactic derivations BIBREF33 , 2) extracting logical forms from the CCG derivations BIBREF34 , and 3) converting the logical forms to an ungrounded graph. The ungrounded graph for the example question and its paraphrases are shown in Figure 3 , Figure 3 and Figure 3 , respectively."
],
[
"The ungrounded graphs are grounded to Freebase subgraphs by mapping entity nodes, entity-entity edges and entity type nodes in the ungrounded graph to Freebase entities, relations and types, respectively. For example, the graph in Figure 3 can be converted to a Freebase graph in Figure 3 by replacing the entity node Czech Republic with the Freebase entity CzechRepublic, the edge (speak.arg $_2$ , speak.in) between $x$ and Czech Republic with the Freebase relation (location.country.official_language.2, location.country.official_language.1), the type node language with the Freebase type language.human_language, and the target node remains intact. The rest of the nodes, edges and types are grounded to null. In a similar fashion, Figure 3 can be grounded to Figure 3 , but not Figure 3 to Figure 3 . If no paraphrase is isomorphic to the target grounded grounded graph, our grounding fails."
],
[
"We use a linear model to map ungrounded graphs to grounded ones. The parameters of the model are learned from question-answer pairs. For example, the question What language do people in Czech Republic speak? paired with its answer $\\lbrace \\textsc {CzechLanguage}\\rbrace $ . In line with most work on question answering against Freebase, we do not rely on annotated logical forms associated with the question for training and treat the mapping of a question to its grounded graph as latent.",
"Let $q$ be a question, let $p$ be a paraphrase, let $u$ be an ungrounded graph for $p$ , and let $g$ be a grounded graph formed by grounding the nodes and edges of $u$ to the knowledge base $\\mathcal {K}$ (throughout we use Freebase as the knowledge base). Following reddylargescale2014, we use beam search to find the highest scoring tuple of paraphrase, ungrounded and grounded graphs $(\\hat{p}, \\hat{u}, \\hat{g})$ under the model $\\theta \\in \\mathbb {R}^n$ : $\n({\\hat{p},\\hat{u},\\hat{g}}) = \\operatornamewithlimits{arg\\,max}_{(p,u,g)} \\theta \\cdot \\Phi (p,u,g,q,\\mathcal {K})\\,,\n$ ",
"where $\\Phi (p, u, g, q, \\mathcal {K}) \\in \\mathbb {R}^n$ denotes the features for the tuple of paraphrase, ungrounded and grounded graphs. The feature function has access to the paraphrase, ungrounded and grounded graphs, the original question, as well as to the content of the knowledge base and the denotation $|g|_\\mathcal {K}$ (the denotation of a grounded graph is defined as the set of entities or attributes reachable at its target node). See sec:details for the features employed. The model parameters are estimated with the averaged structured perceptron BIBREF35 . Given a training question-answer pair $(q,\\mathcal {A})$ , the update is: $\n\\theta ^{t+1} \\leftarrow \\theta ^{t} + \\Phi (p^+, u^+, g^+, q,\n\\mathcal {K}) - \\Phi (\\hat{p}, \\hat{u}, \\hat{g}, q, \\mathcal {K})\\,,\n$ ",
"where $({p^+,u^+,g^+})$ denotes the tuple of gold paraphrase, gold ungrounded and grounded graphs for $q$ . Since we do not have direct access to the gold paraphrase and graphs, we instead rely on the set of oracle tuples, $\\mathcal {O}_{\\mathcal {K}, \\mathcal {A}}(q)$ , as a proxy: $\n(p^{+},u^{+},{g^{+}}) = \\operatornamewithlimits{arg\\,max}_{(p,u,g) \\in \\mathcal {O}_{\\mathcal {K},\\mathcal {A}}(q)} \\theta \\cdot \\Phi ({p,u,g,q,\\mathcal {K}})\\,,\n$ ",
"where $\\mathcal {O}_{\\mathcal {K}, \\mathcal {A}}(q)$ is defined as the set of tuples ( $p$ , $u$ , $g$ ) derivable from the question $q$ , whose denotation $|g|_\\mathcal {K}$ has minimal $F_1$ -loss against the gold answer $\\mathcal {A}$ . We find the oracle graphs for each question a priori by performing beam-search with a very large beam."
],
[
"Below, we give details on the evaluation dataset and baselines used for comparison. We also describe the model features and provide implementation details."
],
[
"We evaluate our approach on the WebQuestions dataset BIBREF5 . WebQuestions consists of 5,810 question-answer pairs where questions represents real Google search queries. We use the standard train/test splits, with 3,778 train and 2,032 test questions. For our development experiments we tune the models on held-out data consisting of 30% training questions, while for final testing we use the complete training data. We use average precision (avg P.), average recall (avg R.) and average F $_1$ (avg F $_1$ ) proposed by berantsemantic2013 as evaluation metrics."
],
[
"We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases.",
"We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions."
],
[
"For WebQuestions, we use 8 handcrafted part-of-speech patterns (e.g., the pattern (DT)?(JJ.? $\\mid $ NN.?){0,2}NN.? matches the noun phrase the big lebowski) to identify candidate named entity mention spans. We use the Stanford CoreNLP caseless tagger for part-of-speech tagging BIBREF38 . For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. We generate ungrounded graphs for these paraphrases and treat the final entity disambiguation and paraphrase selection as part of the semantic parsing problem.",
"We use the features from reddylargescale2014. These include edge alignments and stem overlaps between ungrounded and grounded graphs, and contextual features such as word and grounded relation pairs. In addition to these features, we add two new real-valued features – the paraphrase classifier's score and the entity disambiguation lattice score.",
"We use beam search to infer the highest scoring graph pair for a question. The search operates over entity-entity edges and entity type nodes of each ungrounded graph. For an entity-entity edge, there are two operations: ground the edge to a Freebase relation, or skip the edge. Similarly, for an entity type node, there are two operations: ground the node to a Freebase type, or skip the node. We use a beam size of 100 in all our experiments."
],
[
"In this section, we present results from five different systems for our question-answering experiments: original, mt, naive, ppdb and bilayered. First two are baseline systems. Other three systems use paraphrases generated from an L-PCFG grammar. naive uses a word lattice with a single start-to-end path representing the input question itself, ppdb uses a word lattice constructed using the PPDB rules, and bilayered uses bi-layered L-PCFG to build word lattices. Note that naive does not require any parallel resource to train, ppdb requires an external paraphrase database, and bilayered, like mt, needs a parallel corpus with paraphrase pairs. We tune our classifier features and GraphParser features on the development data. We use the best setting from tuning for evaluation on the test data."
],
[
"We described a grammar method to generate paraphrases for questions, and applied it to a question answering system based on semantic parsing. We showed that using paraphrases for a question answering system is a useful way to improve its performance. Our method is rather generic and can be applied to any question answering system."
],
[
"The authors would like to thank Nitin Madnani for his help with the implementation of the paraphrase classifier. We would like to thank our anonymous reviewers for their insightful comments. This research was supported by an EPSRC grant (EP/L02411X/1), the H2020 project SUMMA (under grant agreement 688139), and a Google PhD Fellowship for the second author."
]
]
}
|
{
"question": [
"Do they evaluate the quality of the paraphrasing model?",
"How many paraphrases are generated per question?",
"What latent variables are modeled in the PCFG?",
"What are the baselines?"
],
"question_id": [
"117aa7811ed60e84d40cd8f9cb3ca78781935a98",
"c359ab8ebef6f60c5a38f5244e8c18d85e92761d",
"ad362365656b0b218ba324ae60701eb25fe664c1",
"423bb905e404e88a168e7e807950e24ca166306c"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"semantic parsing",
"semantic parsing",
"semantic parsing",
"semantic parsing"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"208951f0d5f93c878368122d70fd94c337104a5e"
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "10*n paraphrases, where n depends on the number of paraphrases that contain the entity mention spans",
"evidence": [
"For WebQuestions, we use 8 handcrafted part-of-speech patterns (e.g., the pattern (DT)?(JJ.? $\\mid $ NN.?){0,2}NN.? matches the noun phrase the big lebowski) to identify candidate named entity mention spans. We use the Stanford CoreNLP caseless tagger for part-of-speech tagging BIBREF38 . For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. We generate ungrounded graphs for these paraphrases and treat the final entity disambiguation and paraphrase selection as part of the semantic parsing problem."
],
"highlighted_evidence": [
"For each candidate mention span, we retrieve the top 10 entities according to the Freebase API. We then create a lattice in which the nodes correspond to mention-entity pairs, scored by their Freebase API scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. We take the top 10 paths through the lattice as possible entity disambiguations. For each possibility, we generate $n$ -best paraphrases that contains the entity mention spans. In the end, this process creates a total of $10n$ paraphrases. "
]
}
],
"annotation_id": [
"12f2e670e6d94fab6636a8ef24121fc2f2100eeb"
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"syntactic information",
"semantic and topical information"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In their traditional use, the latent states in L-PCFGs aim to capture syntactic information. We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions."
],
"highlighted_evidence": [
"We introduce here the use of an L-PCFG with two layers of latent states: one layer is intended to capture the usual syntactic information, and the other aims to capture semantic and topical information by using a large set of states with specific feature functions."
]
}
],
"annotation_id": [
"727ec6309fb3d7beb4d8cf4455fe5c4778bb660e"
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"GraphParser without paraphrases",
"monolingual machine translation based model for paraphrase generation"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases.",
"We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions."
],
"highlighted_evidence": [
"We use GraphParser without paraphrases as our baseline. This gives an idea about the impact of using paraphrases",
"We compare our paraphrasing models with monolingual machine translation based model for paraphrase generation BIBREF24 , BIBREF36 . In particular, we use Moses BIBREF37 to train a monolingual phrase-based MT system on the Paralex corpus. Finally, we use Moses decoder to generate 10-best distinct paraphrases for the test questions."
]
}
],
"annotation_id": [
"32749f613e7b20e5fde56cfe720b1ecddf2646ff"
],
"worker_id": [
"ab1027fb3232572ed0261cb9521d6d9f472e86e2"
]
}
]
}
|
{
"caption": [
"Figure 1: An example word lattice for the question What language do people in Czech Republic speak? using the lexical and phrasal rules from the PPDB.",
"Figure 2: Trees used for bi-layered L-PCFG training. The questions what day is nochebuena, when is nochebuena and when is nochebuena celebrated are paraphrases from the Paralex corpus. Each nonterminal is decorated with a syntactic label and two identifiers, e.g., for WP-7-254, WP is the syntactic label assigned by the BLLIP parser, 7 is the syntactic latent state, and 254 is the semantic latent state.",
"Figure 3: Ungrounded graphs for an input question and its paraphrases along with its correct grounded graph. The green squares indicate NL or Freebase entities, the yellow rectangles indicate unary NL predicates or Freebase types, the circles indicate NL or Freebase events, the edge labels indicate binary NL predicates or Freebase relations, and the red diamonds attach to the entity of interest (the answer to the question).",
"Table 1: Oracle statistics and results on the WebQuestions development set.",
"Table 2: Results on WebQuestions test dataset."
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"7-Figure3-1.png",
"9-Table1-1.png",
"9-Table2-1.png"
]
}
|
1709.07916
|
Characterizing Diabetes, Diet, Exercise, and Obesity Comments on Twitter
|
Social media provide a platform for users to express their opinions and share information. Understanding public health opinions on social media, such as Twitter, offers a unique approach to characterizing common health issues such as diabetes, diet, exercise, and obesity (DDEO); however, collecting and analyzing a large-scale conversational public health data set is a challenging research task. The goal of this research is to analyze the characteristics of the general public's opinions in regard to diabetes, diet, exercise and obesity (DDEO) as expressed on Twitter. A multi-component semantic and linguistic framework was developed to collect Twitter data, discover topics of interest about DDEO, and analyze the topics. Of the 4.5 million tweets extracted, 8% discussed diabetes, 23.7% diet, 16.6% exercise, and 51.7% obesity. The strongest correlation among the topics was determined between exercise and obesity. Other notable correlations were: diabetes and obesity, and diet and obesity. DDEO terms were also identified as subtopics of each of the DDEO topics. The frequent subtopics discussed along with Diabetes, excluding the DDEO terms themselves, were blood pressure, heart attack, yoga, and Alzheimer. The non-DDEO subtopics for Diet included vegetarian, pregnancy, celebrities, weight loss, religious, and mental health, while subtopics for Exercise included computer games, brain, fitness, and daily plan. Non-DDEO subtopics for Obesity included Alzheimer, cancer, and children. With 2.67 billion social media users in 2016, publicly available data such as Twitter posts can be utilized to support clinical providers, public health experts, and social scientists in better understanding common public opinions in regard to diabetes, diet, exercise, and obesity.
|
{
"section_name": [
"Introduction",
"Methods",
"Data Collection",
"Topic Discovery",
"Topic Content Analysis",
"Results",
"Discussion",
"Conclusion",
"Conflict of interest",
"Acknowledgement"
],
"paragraphs": [
[
"The global prevalence of obesity has doubled between 1980 and 2014, with more than 1.9 billion adults considered as overweight and over 600 million adults considered as obese in 2014 BIBREF0 . Since the 1970s, obesity has risen 37 percent affecting 25 percent of the U.S. adults BIBREF1 . Similar upward trends of obesity have been found in youth populations, with a 60% increase in preschool aged children between 1990 and 2010 BIBREF2 . Overweight and obesity are the fifth leading risk for global deaths according to the European Association for the Study of Obesity BIBREF0 . Excess energy intake and inadequate energy expenditure both contribute to weight gain and diabetes BIBREF3 , BIBREF4 .",
"Obesity can be reduced through modifiable lifestyle behaviors such as diet and exercise BIBREF4 . There are several comorbidities associated with being overweight or obese, such as diabetes BIBREF5 . The prevalence of diabetes in adults has risen globally from 4.7% in 1980 to 8.5% in 2014. Current projections estimate that by 2050, 29 million Americans will be diagnosed with type 2 diabetes, which is a 165% increase from the 11 million diagnosed in 2002 BIBREF6 . Studies show that there are strong relations among diabetes, diet, exercise, and obesity (DDEO) BIBREF7 , BIBREF4 , BIBREF8 , BIBREF9 ; however, the general public's perception of DDEO remains limited to survey-based studies BIBREF10 .",
"The growth of social media has provided a research opportunity to track public behaviors, information, and opinions about common health issues. It is estimated that the number of social media users will increase from 2.34 billion in 2016 to 2.95 billion in 2020 BIBREF11 . Twitter has 316 million users worldwide BIBREF12 , providing a unique opportunity to understand users' opinions with respect to the most common health issues BIBREF13 . Publicly available Twitter posts have facilitated data collection and leveraged the research at the intersection of public health and data science; thus, informing the research community of major opinions and topics of interest among the general population BIBREF14 , BIBREF15 , BIBREF16 that cannot otherwise be collected through traditional means of research (e.g., surveys, interviews, focus groups) BIBREF17 , BIBREF18 . Furthermore, analyzing Twitter data can help health organizations such as state health departments and large healthcare systems to track health opinions of their populations and provide effective health advice when needed BIBREF13 .",
"Among computational methods to analyze tweets, computational linguistics is a well-known developed approach to gain insight into a population, track health issues, and discover new knowledge BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Twitter data has been used for a wide range of health and non-health related applications, such as stock market BIBREF23 and election analysis BIBREF24 . Some examples of Twitter data analysis for health-related topics include: flu BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , mental health BIBREF31 , Ebola BIBREF32 , BIBREF33 , Zika BIBREF34 , medication use BIBREF35 , BIBREF36 , BIBREF37 , diabetes BIBREF38 , and weight loss and obesity BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF21 .",
"The previous Twitter studies have dealt with extracting common topics of one health issue discussed by the users to better understand common themes; however, this study utilizes an innovative approach to computationally analyze unstructured health related text data exchanged via Twitter to characterize health opinions regarding four common health issues, including diabetes, diet, exercise and obesity (DDEO) on a population level. This study identifies the characteristics of the most common health opinions with respect to DDEO and discloses public perception of the relationship among diabetes, diet, exercise and obesity. These common public opinions/topics and perceptions can be used by providers and public health agencies to better understand the common opinions of their population denominators in regard to DDEO, and reflect upon those opinions accordingly."
],
[
"Our approach uses semantic and linguistics analyses for disclosing health characteristics of opinions in tweets containing DDEO words. The present study included three phases: data collection, topic discovery, and topic-content analysis."
],
[
"This phase collected tweets using Twitter's Application Programming Interface (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's API provides both historic and real-time data collection; the latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available on the first author's website. Figure FIGREF3 shows a sample of the tweets collected in this research."
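The query-based filtering step above can be sketched as follows. This is a minimal, hypothetical illustration: the actual study used Twitter's streaming API with the pre-defined queries of Table 1, while `DDEO_QUERIES`, `label_tweet`, and `filter_tweets` below are made-up names and the term lists are simplified assumptions.

```python
# Illustrative sketch: filtering a batch of collected tweets by DDEO query terms.
# Term lists are simplified stand-ins for the pre-defined queries in Table 1.
DDEO_QUERIES = {
    "diabetes": ["diabetes", "diabetic"],
    "diet": ["diet", "dieting"],
    "exercise": ["exercise", "workout"],
    "obesity": ["obesity", "obese"],
}

def label_tweet(text):
    """Return the set of DDEO topics whose query terms appear in the tweet text."""
    text = text.lower()
    return {topic for topic, terms in DDEO_QUERIES.items()
            if any(term in text for term in terms)}

def filter_tweets(tweets):
    """Keep only tweets matching at least one DDEO query, paired with their labels."""
    labeled = [(t, label_tweet(t)) for t in tweets]
    return [(t, labels) for t, labels in labeled if labels]
```

Note that naive substring matching like this over-matches in general; it is only meant to convey the keyword-driven selection idea.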
],
[
"To discover topics from the collected tweets, we used a topic modeling approach that fuzzy-clusters semantically related words, for example assigning “diabetes\", “cancer\", and “influenza\" into a topic with an overall “disease\" theme BIBREF44 , BIBREF45 . Topic modeling has a wide range of applications in health and medical domains, such as predicting protein-protein relationships based on the literature knowledge BIBREF46 , discovering relevant clinical concepts and structures in patients' health records BIBREF47 , and identifying patterns of clinical events in a cohort of brain cancer patients BIBREF48 .",
"Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular and effective model BIBREF50 , BIBREF19 , as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 . LDA assumes that a corpus contains topics such that each word in each document can be assigned to the topics with different degrees of membership BIBREF53 , BIBREF54 , BIBREF55 .",
"Twitter users can post their opinions or share information about a subject to the public. Identifying the main topics of users' tweets provides an interesting point of reference, but conceptualizing larger subtopics of millions of tweets can reveal valuable insight to users' opinions. The topic discovery component of the study approach uses LDA to find main topics, themes, and opinions in the collected tweets.",
"We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets. Before identifying the opinions, two pre-processing steps were implemented: (1) using a standard list to remove stop words that have no semantic value for analysis (such as “the\"); and (2) finding the optimum number of topics. To determine a proper number of topics, log-likelihood estimation with 80% of tweets for training and 20% of tweets for testing was used, taking the number of topics with the highest held-out log-likelihood as optimal BIBREF57 . The highest log-likelihood was obtained with 425 topics."
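The topic-number selection procedure (held-out log-likelihood on an 80/20 split) can be sketched as below. This is a rough sketch, not the study's pipeline: it substitutes scikit-learn's `LatentDirichletAllocation` for Mallet, uses a toy corpus in place of the 4.5 million tweets, and `score` returns only an approximate log-likelihood bound.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for the tweet collection.
docs = [
    "diabetes blood sugar insulin", "diet weight loss vegetarian",
    "exercise fitness gym workout", "obesity children health risk",
    "diabetes diet blood sugar", "exercise workout daily plan",
    "obesity diabetes risk health", "diet vegetarian weight plan",
    "fitness gym exercise daily", "children obesity diet health",
]
X = CountVectorizer(stop_words="english").fit_transform(docs)

# 80% of documents for training, 20% held out for testing.
split = int(0.8 * X.shape[0])
X_train, X_test = X[:split], X[split:]

# Pick the number of topics with the highest held-out (approximate) log-likelihood.
best_k, best_ll = None, float("-inf")
for k in (2, 3, 4):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X_train)
    ll = lda.score(X_test)  # approximate log-likelihood of the held-out documents
    if ll > best_ll:
        best_k, best_ll = k, ll
```

The study swept a much larger range of candidate topic counts, settling on 425.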
],
[
"The topic content analysis component used an objective, lexicon-based interpretation approach to analyze the content of topics. The lexicon-based approach uses dictionaries to disclose the semantic orientation of words in a topic. Linguistic Inquiry and Word Count (LIWC) is a linguistics analysis tool that reveals thoughts, feelings, personality, and motivations in a corpus BIBREF58 , BIBREF59 , BIBREF60 . LIWC has accepted rates of sensitivity, specificity, and English proficiency BIBREF61 . LIWC has a health-related dictionary that can help determine whether a topic contains words associated with health. In this analysis, we used LIWC to find health-related topics."
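The lexicon-based filtering step (reducing 425 discovered topics to 222 health-related ones) can be illustrated as follows. Since LIWC and its dictionaries are proprietary, `HEALTH_LEXICON` below is a tiny made-up stand-in for the health dictionary, and `is_health_topic`/`filter_topics` are hypothetical helper names.

```python
# Toy stand-in for LIWC's health dictionary (LIWC itself is proprietary).
HEALTH_LEXICON = {"diabetes", "obesity", "diet", "exercise", "blood",
                  "pressure", "cancer", "heart", "insulin", "fitness"}

def is_health_topic(top_words, min_hits=2):
    """Flag a topic whose top words contain at least `min_hits` lexicon words."""
    return sum(w in HEALTH_LEXICON for w in top_words) >= min_hits

def filter_topics(topics):
    """Return only the health-related topics (mirrors the 425 -> 222 filtering step)."""
    return [t for t in topics if is_health_topic(t)]
```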
],
[
"Obesity and Diabetes showed the highest and the lowest number of tweets (51.7% and 8.0%). Diet and Exercise formed 23.7% and 16.6% of the tweets (Table TABREF6 ).",
"Out of all 4.5 million DDEO-related tweets returned by Twitter's API, the LDA found 425 topics. We used LIWC to filter the detected 425 topics and found 222 health-related topics. Additionally, we labeled topics based on the availability of DDEO words. For example, if a topic had “diet\", we labeled it as a diet-related topic. As expected, and driven by the initial Twitter API queries, common topics were Diabetes, Diet, Exercise, and Obesity (DDEO). Table TABREF7 shows that the highest and the lowest numbers of topics were related to exercise and diabetes (80 and 21 out of 222). Diet and Obesity had almost similar rates (58 and 63 out of 222).",
"Each of the DDEO topics included several common subtopics including both DDEO and non-DDEO terms discovered by the LDA algorithm (Table TABREF7 ). Common subtopics for “Diabetes\", in order of frequency, included type 2 diabetes, obesity, diet, exercise, blood pressure, heart attack, yoga, and Alzheimer. Common subtopics for “Diet\" included obesity, exercise, weight loss [medicine], celebrities, vegetarian, diabetes, religious diet, pregnancy, and mental health. Frequent subtopics for “Exercise\" included fitness, obesity, daily plan, diet, brain, diabetes, and computer games. And finally, the most common subtopics for “Obesity\" included diet, exercise, children, diabetes, Alzheimer, and cancer (Table TABREF7 ). Table TABREF8 provides illustrative examples for each of the topics and subtopics.",
"Further exploration of the subtopics revealed additional patterns of interest (Tables TABREF7 and TABREF8 ). We found 21 diabetes-related topics with 8 subtopics. While type 2 diabetes was the most frequent of the subtopics, heart attack, yoga, and Alzheimer were the least frequent subtopics for diabetes. Diet had a wide variety of emerging themes ranging from celebrity diet (e.g., Beyonce) to religious diet (e.g., Ramadan). Diet was detected in 63 topics with 10 subtopics; obesity was the most discussed diet-related subtopic, while pregnancy and mental health were the least discussed. Exploring the themes for Exercise subtopics revealed subjects such as computer games (e.g., Pokemon-Go) and brain exercises (e.g., memory improvement). Exercise had 7 subtopics, with fitness as the most discussed subtopic and computer games as the least discussed subtopic. Finally, Obesity themes showed topics such as Alzheimer (e.g., research studies) and cancer (e.g., breast cancer). Obesity had the lowest diversity of subtopics: six, with diet as the most discussed subtopic, and Alzheimer and cancer as the least discussed subtopics.",
"Diabetes subtopics show the relation between diabetes and exercise, diet, and obesity. Subtopics of diabetes revealed that users post about the relationship between diabetes and other diseases such as heart attack (Tables TABREF7 and TABREF8 ). The subtopic Alzheimer is also shown in the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic expressed by users and scientifically documented in the literature.",
"The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relation among the four DDEO areas. Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ). The strongest correlation among the topics was determined to be between exercise and obesity ( INLINEFORM0 ). Other notable correlations were: diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 )."
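The reported topic correlations can be reproduced in spirit with Pearson correlation over per-tweet topic indicators. The data here is synthetic (the paper reports p-values in Figure 2, which this sketch does not attempt to reproduce), and the overlap between the two indicator vectors is induced deliberately to make the correlation visible.

```python
import numpy as np

# Synthetic per-tweet topic indicators (1 = topic present in the tweet); real
# values would come from the LDA document-topic assignments.
rng = np.random.default_rng(0)
n = 1000
exercise = rng.integers(0, 2, n)
obesity = exercise | rng.integers(0, 2, n)  # induce overlap with exercise

def pearson(a, b):
    """Pearson correlation coefficient between two 1-d arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

r = pearson(exercise.astype(float), obesity.astype(float))
```

Because every tweet with the exercise topic also carries the obesity topic in this toy construction, `r` comes out clearly positive, mirroring the exercise-obesity relationship found in the study.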
],
[
"Diabetes, diet, exercise, and obesity are common public health related opinions. Analyzing individual-level opinions by automated algorithmic techniques can be a useful approach to better characterize health opinions of a population. Traditional public health polls and surveys are limited by a small sample size; however, Twitter provides a platform to capture an array of opinions and shared information as expressed in the words of the tweeter. Studies show that Twitter data can be used to discover trending topics, and that there is a strong correlation between Twitter health conversations and Centers for Disease Control and Prevention (CDC) statistics BIBREF62 .",
"This research provides a computational content analysis approach to conduct a deep analysis using a large data set of tweets. Our framework decodes public health opinions in DDEO related tweets, which can be applied to other public health issues. Among health-related subtopics, there are a wide range of topics from diseases to personal experiences such as participating in religious activities or vegetarian diets.",
"Diabetes subtopics showed the relationship between diabetes and exercise, diet, and obesity (Tables TABREF7 and TABREF8 ). Subtopics of diabetes revealed that users posted about the relation between diabetes and other diseases such as heart attack. The subtopic Alzheimer is also shown in the obesity subtopics. This overlap between categories prompts the discussion of research and linkages among obesity, diabetes, and Alzheimer's disease. Type 2 diabetes was another subtopic that was also expressed by users and scientifically documented in the literature. The inclusion of Yoga in posts about diabetes is interesting. While yoga would certainly be labeled as a form of fitness, when considering the post, it was insightful to see discussion on the mental health benefits that yoga offers to those living with diabetes BIBREF63 .",
"Diet had the highest number of subtopics. For example, religious diet activities such as fasting during the month of Ramadan for Muslims incorporated two subtopics categorized under the diet topic (Tables TABREF7 and TABREF8 ). This information has implications for the type of diets that are being practiced in the religious community, but may help inform religious scholars who focus on health and psychological conditions during fasting. Other religions such as Judaism, Christianity, and Taoism have periods of fasting that were not captured in our data collection, which may have been due to lack of posts or the timeframe in which we collected data. The diet plans of celebrities were also considered influential to explaining and informing diet opinions of Twitter users BIBREF64 .",
"Exercise themes show the Twitter users' association of exercise with “brain\" benefits such as increased memory and cognitive performance (Tables TABREF7 and TABREF8 ) BIBREF65 . The topics also confirm that exercising is associated with controlling diabetes and assisting with meal planning BIBREF66 , BIBREF9 , and obesity BIBREF67 . Additionally, we found that Twitter users mentioned exercise topics about the use of computer games that assist with exercising. The recent mobile gaming phenomenon Pokemon-Go BIBREF68 was highly associated with the exercise topic. Pokemon-Go allows users to operate in a virtual environment while simultaneously functioning in the real world. Capturing Pokemons, battling characters, and finding physical locations for meeting other users required physical activity to reach predefined locations. These themes reflect on the potential of augmented reality in increasing patients' physical activity levels BIBREF69 .",
"Obesity had the lowest number of subtopics in our study. Three of the subtopics were related to other diseases such as diabetes (Tables TABREF7 and TABREF8 ). The scholarly literature has well documented the possible linkages between obesity and chronic diseases such as diabetes BIBREF1 , as supported by the study results. The topic of children is another prominent subtopic associated with obesity. There has been an increasing number of opinions in regard to child obesity, and national health campaigns have been developed to encourage physical activity among children BIBREF70 . Alzheimer was also identified as a topic under obesity. Although a perplexing finding at first glance, recent studies have been conducted to identify a possible correlation between obesity and Alzheimer's disease BIBREF71 , BIBREF72 , BIBREF73 . Indeed, Twitter users have expressed opinions about the study of Alzheimer's disease and the linkage between these two topics.",
"This paper addresses a need for clinical providers, public health experts, and social scientists to use a large conversational dataset to collect population-level opinions and information needs. Although our framework is applied to Twitter, the applications from this study can be used in patient communication devices monitored by physicians or weight management interventions with social media accounts, and can support large-scale population-wide initiatives to promote healthy behaviors and preventative measures for diabetes, diet, exercise, and obesity.",
"This research has some limitations. First, our DDEO analysis does not take the geographical location of the Twitter users into consideration and thus does not reveal whether certain geographical differences exist. Second, we used a limited number of queries to select the initial pool of tweets, thus perhaps missing tweets that may have been relevant to DDEO but used terms not covered by our queries. Third, our analysis only included tweets generated in one month; however, as our previous work has demonstrated BIBREF42 , public opinion can change during a year. Additionally, we did not track individuals across time to detect changes in common themes discussed. Our future research plans include introducing a dynamic framework to collect and analyze DDEO-related tweets during extended time periods (multiple months) and incorporating spatial analysis of DDEO-related tweets."
],
[
"This study represents the first step in developing routine processes to collect, analyze, and interpret DDEO-related posts to social media around health-related topics and presents a transdisciplinary approach to analyzing public discussions around health topics. With 2.34 billion social media users in 2016, the ability to collect and synthesize social media data will continue to grow. Developing methods to make this process more streamlined and robust will allow for more rapid identification of public health trends in real time.",
"Note: Amir Karami will handle correspondence at all stages of refereeing and publication."
],
[
"The authors state that they have no conflict of interest."
],
[
"This research was partially supported by the first author's startup research funding provided by the University of South Carolina, School of Library and Information Science. We thank Jill Chappell-Fail and Jeff Salter at the University of South Carolina College of Information and Communications for assistance with technical support."
]
]
}
|
{
"question": [
"Do they evaluate only on English data?",
"How strong was the correlation between exercise and diabetes?",
"How were topics of interest about DDEO identified?"
],
"question_id": [
"e5ae8ac51946db7475bb20b96e0a22083b366a6d",
"18288c7b0f8bd7839ae92f9c293e7fb85c7e146a",
"b5e883b15e63029eb07d6ff42df703a64613a18a"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"twitter",
"twitter",
"twitter"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"This phase collected tweets using Twitter's Application Programming Interfaces (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, and obesity were selected as the related words BIBREF4 and the related health areas BIBREF19 . Twitter's APIs provides both historic and real-time data collections. The latter method randomly collects 1% of publicly available tweets. This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. We used the queries to collect approximately 4.5 million related tweets between 06/01/2016 and 06/30/2016. The data will be available in the first author's website. Figure FIGREF3 shows a sample of collected tweets in this research."
],
"highlighted_evidence": [
"This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame. "
]
}
],
"annotation_id": [
"13493df9ec75ae877c9904e23729ff119814671f"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "weak correlation with p-value of 0.08",
"evidence": [
"The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relation among the four DDEO areas. Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ). The strongest correlation among the topics was determined to be between exercise and obesity ( INLINEFORM0 ). Other notable correlations were: diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 ).",
"FLOAT SELECTED: Figure 2: DDEO Correlation P-Value"
],
"highlighted_evidence": [
"The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics.",
"Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ).",
"FLOAT SELECTED: Figure 2: DDEO Correlation P-Value"
]
}
],
"annotation_id": [
"ea7f28bf7cf3afc36dfd4eade6a0235621cd2869"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "using topic modeling model Latent Dirichlet Allocation (LDA)",
"evidence": [
"To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes\", “cancer\", and “influenza\" into a topic that has an overall “disease\" theme BIBREF44 , BIBREF45 . Topic modeling has a wide range of applications in health and medical domains such as predicting protein-protein relationships based on the literature knowledge BIBREF46 , discovering relevant clinical concepts and structures in patients' health records BIBREF47 , and identifying patterns of clinical events in a cohort of brain cancer patients BIBREF48 .",
"Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 . LDA assumes that a corpus contains topics such that each word in each document can be assigned to the topics with different degrees of membership BIBREF53 , BIBREF54 , BIBREF55 .",
"We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets. Before identifying the opinions, two pre-processing steps were implemented: (1) using a standard list for removing stop words, that do not have semantic value for analysis (such as “the\"); and, (2) finding the optimum number of topics. To determine a proper number of topics, log-likelihood estimation with 80% of tweets for training and 20% of tweets for testing was used to find the highest log-likelihood, as it is the optimum number of topics BIBREF57 . The highest log-likelihood was determined 425 topics."
],
"highlighted_evidence": [
"To discover topics from the collected tweets, we used a topic modeling approach that fuzzy clusters the semantically related words such as assigning “diabetes\", “cancer\", and “influenza\" into a topic that has an overall “disease\" theme BIBREF44 , BIBREF45 .",
"Among topic models, Latent Dirichlet Allocation (LDA) BIBREF49 is the most popular effective model BIBREF50 , BIBREF19 as studies have shown that LDA is an effective computational linguistics model for discovering topics in a corpus BIBREF51 , BIBREF52 .",
"We used the Mallet implementation of LDA BIBREF49 , BIBREF56 with its default settings to explore opinions in the tweets."
]
}
],
"annotation_id": [
"33c66527e46da56cb4033d4a47173f9aa136265d"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
]
}
|
{
"caption": [
"Figure 1: A Sample of Tweets",
"Table 1: DDEO Queries",
"Table 2: DDEO Topics and Subtopics - Diabetes, Diet, Exercise, and Obesity are shown with italic and underline styles in subtopics",
"Figure 2: DDEO Correlation P-Value",
"Table 3: Topics Examples"
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Figure2-1.png",
"7-Table3-1.png"
]
}
|
1909.00154
|
Rethinking travel behavior modeling representations through embeddings
|
This paper introduces the concept of travel behavior embeddings, a method for re-representing discrete variables that are typically used in travel demand modeling, such as mode, trip purpose, education level, family type or occupation. This re-representation process essentially maps those variables into a latent space called the \emph{embedding space}. The benefit of this is that such spaces allow for richer nuances than the typical transformations used in categorical variables (e.g. dummy encoding, contrasted encoding, principal components analysis). While the usage of latent variable representations is not new per se in travel demand modeling, the idea presented here brings several innovations: it is an entirely data driven algorithm; it is informative and consistent, since the latent space can be visualized and interpreted based on distances between different categories; it preserves interpretability of coefficients, despite being based on Neural Network principles; and it is transferable, in that embeddings learned from one dataset can be reused for other ones, as long as travel behavior remains consistent between the datasets. ::: The idea is strongly inspired by natural language processing techniques, namely the word2vec algorithm. This algorithm is behind recent developments such as automatic translation or next word prediction. Our method is demonstrated using a mode choice model, and shows improvements of up to 60\% with respect to initial likelihood, and up to 20% with respect to likelihood of the corresponding traditional model (i.e. using dummy variables) in out-of-sample evaluation. We provide a new Python package, called PyTre (PYthon TRavel Embeddings), that others can straightforwardly use to replicate our results or improve their own models. Our experiments are themselves based on an open dataset (swissmetro).
|
{
"section_name": [
"Introduction",
"Representing categorical variables",
"The concept of text embeddings",
"Travel behaviour embeddings",
"Travel behaviour embeddings ::: The general idea",
"Travel behaviour embeddings ::: Methodology",
"An experiment with mode choice",
"An experiment with mode choice ::: The Swissmetro dataset",
"An experiment with mode choice ::: Principles for the model specification"
],
"paragraphs": [
[
"Since their early days, representation in random utility behavior models has followed generally quite clear principles. For example, numeric quantities like travel time and cost may be directly used or transformed depending on observed non-linear effects (e.g. using log). Numeric variables that are not “quantities\" per se, such as age or even geographic coordinates, tend to be discretized and then transformed into vectors of dummy variables. Similarly, categorical variables such as education level or trip purpose are already discrete, and thus are also usually “dummyfied\". Then, we may interact any subset of the above by combining (typically, multiplying) them, as long as we get in the end a vector of numeric values that can be incorporated in a statistical model, a linear one in the case of the most common logit model.",
"There are however phenomena that are hard to represent, and modelers end up struggling to find the right representation. For example, influence of social interactions between different persons, hierarchical decision making, autocorrelated nature of time and space, or abstract concepts such as accessibility, attitudes, personality traits and so on. The point here, is that the nature of our models seems to enforce a compromise between the true semantics of a variable (i.e. the “meaning\" of a certain information for the decision making process) and its realisation in practice. And that further research should be done to find new representation paradigms.",
"Historically speaking, the natural language processing (NLP) field has had similar dilemmas for decades, and for a while two general trends were competing: the statistical modeling approaches, and the linguistic theory based approaches. The former relied on simple representations, such as vector frequencies, or dummy variables, to become practical, while the latter used domain knowledge such as grammars or logic. Until recently, neither had considerable success in making machines able to understand or generate human language, but developments in deep neural networks together with overwhelmingly massive amounts of data (i.e. the World Wide Web) brought them to a new era, where the two are approaching each other and achieving results hitherto considered extremely hard, such as question answering, translation, and next word prediction. One of the key concepts in this revolution is that of embeddings, which will be further explained in this paper.",
"Our focus here is on the representation of categorical variables. The default paradigm is dummy variables (also known as “one-hot-encoding\" in machine learning literature), which have well-known limitations, namely the explosion of dimensionality and enforced orthogonality. The former happens because we assign one new “dummy\" variable to each of D-1 categories, and easily go from a small original variable specification to one with hundreds of variables, bringing problems in model estimation and analysis. This often affects the data collection process itself. Since one doesn't want to end up with too many categories, we might as well offer fewer options in a survey, or decrease the resolution of a sensor. The problem of enforced orthogonality relates to the fact that, in a dummy encoding, all categories become equidistant. The similarity between “student\" and “employed\" is the same as between “student\" and “retired\", which in many cases (e.g. mode choice, departure time choice) goes against intuition. Other encoding methods exist, such as contrasted encoding or principal components analysis (PCA). The former ends up being a subtle variation on the dummy approach, but the latter already provides an interesting answer to the problem: categories are no longer forcibly equidistant, and the number of variables can be much reduced. However, it is a non-supervised approach. The distance between “student\" and “employed\" will always be the same, regardless of the problem we are solving, but this may be intuitively illogical if we consider car ownership versus departure time choice models for example.",
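The enforced-orthogonality point can be seen numerically: under one-hot encoding every pair of categories is equidistant, while an embedding space can place related categories closer together. The 2-d embedding values below are invented purely for illustration, not learned from any dataset.

```python
import numpy as np

# One-hot (dummy) representation: one axis per category.
one_hot = {
    "student":  np.array([1.0, 0.0, 0.0]),
    "employed": np.array([0.0, 1.0, 0.0]),
    "retired":  np.array([0.0, 0.0, 1.0]),
}
# Hypothetical 2-d embeddings (illustrative values only).
embed = {
    "student":  np.array([0.9, 0.1]),
    "employed": np.array([0.8, 0.3]),
    "retired":  np.array([-0.7, 0.2]),
}

def dist(space, a, b):
    """Euclidean distance between two categories in a given representation."""
    return float(np.linalg.norm(space[a] - space[b]))

# One-hot: every pair of categories is equidistant (sqrt(2) here).
d1 = dist(one_hot, "student", "employed")
d2 = dist(one_hot, "student", "retired")
# Embeddings: "student" can sit closer to "employed" than to "retired".
e1 = dist(embed, "student", "employed")
e2 = dist(embed, "student", "retired")
```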
"The key idea in this paper is to introduce a method, called Travel Behavior embeddings, that borrows much from the NLP concept. This method serves to encode categorical variables, and is dependent on the problem at hand. We will focus on mode choice, and test on a well-known dataset, by comparing with both dummy and PCA encoding. All the dataset and code are made openly available, and the reader can follow and generate results him/herself using an iPython notebook included. Our ultimate goal is certainly that the reader reuses our PyTre package for own purposes.",
"This paper presents some results and conclusions, after a relatively long exploration and analysis process, including other datasets and code variations not mentioned here in the interest of clarity and replicability. While we show these concepts to be promising and innovative in this paper, one should be wary of over-hyping yet another Machine Learning/Artificial Intelligence concept: after all, Machine Learning is still essentially based on statistics. In NLP, the number of different words in consideration at a given moment can be in the order of tens of thousands, while our categorical variables rarely go beyond a few dozens. This means, for example, that it becomes clear later that the fewer the original categories, the smaller the benefit of embeddings (in the limit, a binary variable like gender is useless to do embeddings with), and also that if we do get a significantly large and statistically representative dataset, a dummy-variable representation is sufficient. We will quickly see, however, that complexity can grow quickly enough to justify an embeddings-based method even without the shockingly better performance observed in NLP applications."
],
[
"We are generally concerned with random utility maximization (RUM) models, for they have a dominant role in travel behavior modeling. The nature of such models is predominantly numeric, linear, and quite often strictly flat (notwithstanding hierarchical variations, such as nested models BIBREF1, hierarchical Bayes BIBREF2, or non-linear transformations). As a consequence, while numerical variables (e.g. travel time, cost, or income) can be directly used as available, perhaps subject to transformations or segmentation, nominal ones bring about a greater challenge. We tend to enforce a limited set of treatments such as:",
"Dummy variables, or one-hot encoding - for each categorical variable $v$ with D categories, we get D-1 binary variables (the “dummies\"). At each input vector $x_n$, with categorical value $v=d$, the value “1\" will be assigned to the corresponding dummy, while “0\" to all others. If $v$ corresponds to the “default\" category, all dummies are “0\".",
"Contrast encoding BIBREF3 - same as dummy encoding, but instead of “1\" for each category, we have a value that results from a contrasting formula. There are many different formulas (e.g. Helmert, Sum, Backward Difference), but all consist of subtracting the mean of the target variable, for a given category, with a general stastic (e.g. the mean of the dependent variable for all categories; the mean of the dependent variable in the previous category in an ordered list).",
"Principal Components Analysis (PCA) - run the PCA algorithm on the data matrix obtained by dummy representation of the categorical variable, then re-represent it with the corresponding projected eigenvector coefficients. One selects K eigenvectors (e.g. according to a variance explained rule), and thus each category is mapped to a vector of K real values.",
"Segmenting models, mixture models - A general alternative to categorical data representation is in fact to avoid it in the first place. One obvious method would be through creating hierarchical disaggregate methods (e.g. one per category). This is not in itself a representation paradigm, but an alternative way to see this problem. It certainly raises scalability and inference concerns.",
"In datasets where behavior heterogeneity is high, and number of observations is significantly smaller than population size, increasing dimensionality by adding a variable per each category is very risky because the amount of data that is in practice usable to estimate each new coefficient becomes insufficient. A simple intuition here is by considering that, for a dummy variable that is only “1\" for a few observations in the dataset, its coefficient will be “activated\" only that small number of times. If there is a lot of variance in the associated behavior, the variance of the coefficient will also be large, and the coefficient will be considered statistically insignificant.",
"The benefit of representations that map into a latent space, like embeddings and PCA, is that such a space is inevitably shared, and thus every observation contributes indirectly to all category variables. This comes with no interpretability cost, because one can always map to the “dummy\" space and analyse the individual coefficients, as will be shown in our experiments."
],
[
"The idea of text embeddings comes from a simple re-representation necessity. A natural-language processing system is itself also a numeric machine, therefore it requires each individual word in a dictionary to match its own numeric representation. Just as in our travel models, a possible solution has been to use dummy variables, and it is quite obvious that the dimensionality of such a one-hot encoding vector, quickly becomes overwhelming. Think for example next word prediction algorithm, like the one we have in our smartphones. It is essentially a skip-gram BIBREF4 model that predicts the next word, given the n words before. The English dictionary has about 300000 words, and if we have about 5 words before for context, the number of independent variables of the model would become 1.5 million!",
"The goal of text embeddings algorithms (e.g. Word2Vec BIBREF5) is to a) reduce the representation of each word to a computationally acceptable dimension, while simultaneously b) learning the semantic distance between different words. In other words, the euclidean distance of semantically related words (e.g. “dog\" and “cat\") in this new space should be smaller than unrelated words (e.g. “dog\" and “optimize\"). As mentioned before, in a dummy (or one-hot) encoding, all distances between words are equal by definition.",
"The word embeddings methodology is very well explained in several webpages such as BIBREF6, so the reader is strongly encouraged to visit them first. However, for the sake of completeness, we summarize here the general idea.",
"Imagine the following task: given a word $w_i$ in a text, predict the next word $w_o$. If we solve it with a neural network model, we could have the architecture in Figure FIGREF8, where the input consists simply of the one-hot-encoding representation of the word (i.e. one dummy variable for each word in a dictionary of dimensionality $D$), and the output corresponds to the probability of each word in the dictionary being the next one (also a vector with dimensionality $D$).",
"The output layer thus consists simply of a softmax function. In other words, exactly the classical multinomial logit formulation that we would have in an RUM, in which each different word corresponds to an “alternative\".",
"The concept of embeddings is directly associated to the hidden layer, which is a set of linear activation neurons, typically with a dimensionality $K<<D$. Each such neuron is simply an identity function: it sums all inputs; then propagates this sum to the output layer. Since only one input neuron is activated at a time (remember that the input is a one-hot-encoding vector, with one “1\" and the rest with “0\"), each hidden layer neuron just propagates the (single) weight that links to that input neuron. If we have enough data for training this model, we will eventually land on a situation where, for each input word, there is a fixed vector of weights that are directly used in the output (softmax) function, to generate the prediction. With more data, this weight vector will not change (down to some small delta threshold). These stable vectors are what we call embeddings, and the dimensionality of these vectors is called embedding size.",
"Formally, we have a dataset $\\mathcal {D}=\\lbrace x_n, y_n\\rbrace , n=1\\ldots N$, where each $x_n$ and $y_n$ are one-hot (dummy) encodings of categorical variables. The dimensionality of $x_n$ is $D\\times 1$, with $D$ being the number of different categories in $x_n$, while the dimensionality of $y_n$ is $C\\times 1$, with $C$ being the number of categories (alternatives) in $y_n$. The full expression for the embeddings model as described is:",
"where $W$ is the embeddings matrix of size $K\\times D$, where $K$ is called the embeddings size. $B$ is a matrix of coefficients ($C\\times K$) for the softmax layer, so $B_c$ is simply the coefficients (row) vector for output class (alternative) $c$, and $\\alpha _c$ is the corresponding intercept. The typical loss function used in such models is called the categorical cross entropy:",
"Where $\\delta _{i}$ is the kronecker delta ($\\delta _{true}=1; \\delta _{false}=0$), and $\\mathcal {L}(n)$ is the cumulative loss for an individual data point. This formalization is the simplest version, without loss of generality. In practice, as seen below, we will model multiple embeddings matrices simultaneously, and will add regularization terms to the loss function, so the models tested in this paper consist of compositions of the above.",
"So these so called embeddings are in fact a relatively shallow data representation in a simple neural network. What is their added value? Obviously, the first practical benefit is dimensionality reduction, because now there is a mapping between each of the $C$ words to a unique vector of size $K$. The second aspect is that this new representation is the one that maximizes the performance towards a specific task (in our example, prediction of the next word), therefore it is a supervised process, as opposed for example to PCA. The third and more interesting aspect relates with semantic similarity. A natural consequence of the mentioned algorithm is that words that have similar output distributions (i.e. next words) will tend to be close to each other. Figure FIGREF10 shows a 2D visualization (t-SNE) with a subset of english words. In such a visualization, data is projected in 2D space by maintaining the same vector-to-vector distances as in the original ($K$ order space). Therefore the X and Y axes have no specific meaning, only distances between every pair of points are relevant.",
"We can see that semantically similar concepts, more specifically concepts that tend to have the same distribution of “next words\", are placed closer. Another intriguing consequence is that, since the words are now in the $K$ dimensional, embeddings space, we can also do some linear algebra on them. A well known formulation is $King-Man+Woman=Queen$. Essentially, the vector $King-Man$ corresponds to the concept of “crowning\" (therefore $Woman+crowning=Queen$). The same could be done with many other concept pairs. Figure FIGREF11 show also an alternative interpretation of “man-female\", as well as examples with cities and verb tense.",
"Finally, another relevant note on the embeddings representation is that, just like the PCA encoding, one can always project back into the original space and use this for interpretability. In other words, since there is a 1-to-1 mapping from each category to its encoding, there is also a 1-to-1 mapping between a model that uses dummy variables and a model using such encodings. This may be useful for interpretability, since in the case of dummy variables we have a direct interpretation (e.g. a beta coefficient value in a logit model) for the effect of a given category, while the same doesn't happen for an encoded variable (i.e. there is no meaning for the value of a single beta coefficient in an embeddings encoding when K>1). In order to preserve statistical significance information (e.g. p-values) we only need to follow the well known rules of normal random variables.",
"There are open databases available (e.g. GLoVe BIBREF9, FastText BIBREF7) that provide word embedding tables for the entire English language (Glove provides several embedding tables, up to embedding size between 100 and 300). In our next word application example, we now talk about models with 500-1500 variables, which is very manageable for our machines today.",
"Summarizing, the general idea of word embeddings is to re-represent a categorical variable into a lower dimensional representation with continuous values . Whenever such a variable is to be used in a model, one can simply replace it with the corresponding embeddings vector. We have previously demonstrated the value of such word embeddings in demand prediction in special events BIBREF10, where we collected event textual descriptions, and used Glove embedding vectors to incorporate such information in a neural network model.",
"Finally, an interesting point to mention relates to the typical difference in dataset size between the original embeddings training model (Glove, approximately 6 billion input word vectors from 37 million texts) and the model one implements to solve a particular problem (in our special events case, less than 1000 short event descriptions, with at most few hundred words each). Instead of creating ourselves a new embeddings model using the events dataset, we reused the pre-trained GloVe dataset. The benefit is significant because in practice we trained our model to deal with all words in the dictionary, much beyond the limited vocabulary that we obtained in our 1000 short texts. In practice we have used a very small percentage of the english dictionary. When, in an out-of-sample test, our model finds words that were not in the training set, it still works perfectly well."
],
[
"Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair.",
"Our hypothesis is that, given the limitations of dummy variables that are commonly used and the unsupervised nature of PCA, using instead an embeddings mechanism should improve significantly the quality of our models, both in terms of loglikelihood but also in terms of allowing for lower complexity (i.e. less variables). Ultimately, one could think of a framework such as GLoVe, where embeddings for such variables could be trivially shared with the community. For example, we could have a “Travel behavior embeddings\" database, incrementally built from travel surveys from around the world. Such database could have embeddings for mode choice target variables, but also for departure time, destination choice, car ownership, and so on. Whenever a modeler wanted to estimate a new model, she could just download the right encodings and use them directly. This is particularly relevant if one considers the complicated challenges for opening or sharing travel survey datasets in our field. Of course, a major question arises: are behaviors that consistent across the world? There are certainly nuances across the world, but we believe that general patterns would emerge (e.g. a “business\" trip purpose will be closer to “work\" than “leisure\", in a departure time choice model; “student\" will be closer to “unemployed\" than to “retired\" in a car ownership model)."
],
[
"We believe that, as with word embeddings, a mapping that preserves semantic distance relative to a certain choice problem, should be useful for modeling. As with a PCA encoding, another benefit is that by sharing parameters in the learning process, the model can generalize better, as opposed to a dummy encoding, where each categorical value has its own parameter, that is only active when observed.",
"The general idea is thus to create a mapping between a variable for which we want to find an embeddings representation, and a target variable, as in Figure FIGREF15. We call the mapping function “PyTre Embeddings\", because that is the name of the object in our proposed Python “Travel Embeddings\" package.",
"From an experimental design and application perspective, the approach followed in this paper is the following:",
"Create list of categorical variables to encode (the encoding set)",
"Split dataset into train, development and test sets",
"For each variable in encoding set, learn the new embeddings using the embeddings train set . This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).",
"Encode choice models for train, development and test sets using the learned embeddings",
"Estimate choice model accordingly using its train set",
"Evaluate the new model using the test set",
"Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood at the development set (i.e. in practice, its out-of-sample generalization performance), and report its performance on the test set."
],
[
"Since a choice model will typically involve other variables than the categorical ones that we learn the embeddings for, it is important to take into account their effects. Figure FIGREF24 shows the simplest travel embeddings model. As an example, the categorical variable is trip purpose, and there are a few other variables such as gender, cost of the alternatives, distance, and so on. Notice that they are directly fed into the softmax output layer, together with the embeddings output.",
"The dataset sizes in transportation behavior modeling are substantially smaller than typical word embeddings ones, and the risk of overfitting is therefore higher. To mitigate this problem, besides adding regularization penalties in the objective function, we add what we call a regularizer layer for each embedding, which is no more than a softmax layer that penalizes whenever it cannot recover the original one-hot-encoding vectors (Figure FIGREF25, left). We call the combination of embeddings and its regularizer network, a Travel Embeddings layer. Finally, it is obviously better to train all embeddings simultaneously, so that they accommodate each other's effects (Figure FIGREF25, right)."
],
[
"The goal of this paper is to test the potential of embeddings in a simple and well-known choice model context, comparing it to well-known baseline techniques. Therefore, the general model specification follows quite simple assumptions. We expect that in future work from us or others, more elaborate derivations can take advantage of embeddings such as nested, mixed logit or latent class choice models (LCCM), for example.",
"We will apply the methodology to the well-known “Swissmetro\" dataset. We will compare it with a dummy variables and PCA baselines. We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, PCA eigenvectors and the choice model are estimated from the same train and development sets, and validate it out-of-sample. For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space.",
"All experiment code is available as a jupyter notebook in a package we created for this work (to which we called PyTre). For estimating the multinomial logit model (MNL) we used the PyLogit BIBREF11 package."
],
[
"The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments.",
"We split the dataset into 3 different parts:",
"Embeddings train set: 60% of the dataset (6373 vectors)",
"Development set: 20% of the dataset (2003 vectors)",
"Test set: 20% of the dataset (2003 vectors)"
],
[
"The PyLogit package BIBREF11 also uses Swissmetro as an example. Therefore, our model specifications will extend the default one from this package. We re-estimated this model with the train set and validated with testset. The results are shown in tables TABREF31 and TABREF32. Since we are comparing the models at the test set, the key indicators should be pseudo R-square and log-likelihood. Indicators that consider model complexity (robust r-square and AIC) are less important on the test set in our view because the overfitting effect (i.e. improving fit just by adding more variables) will no longer be verifiable in this way. Instead, one sees overfitting if test set performance is considerably inferior to the training set."
]
]
}
|
{
"question": [
"What datasets are used for evaluation?",
"How do their train their embeddings?",
"How do they model travel behavior?",
"How do their interpret the coefficients?"
],
"question_id": [
"c45a160d31ca8eddbfea79907ec8e59f543aab86",
"7358a1ce2eae380af423d4feeaa67d2bd23ae9dd",
"1165fb0b400ec1c521c1aef7a4e590f76fee1279",
"f2c5da398e601e53f9f545947f61de5f40ede1ee"
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Swissmetro dataset"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to its description BIBREF0, the respondents provided information in order to analyze the impact of the modal innovation in transportation, represented by the Swissmetro, a revolutionary mag-lev underground system, against the usual transport modes represented by car and train. After discarding respondents for which some variables were not available (e.g. age, purpose), a total of 10469 responses from 1188 individuals were used for the experiments."
],
"highlighted_evidence": [
"The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. "
]
}
],
"annotation_id": [
"5ac34eb67f1f8386ca9654d0d56e6e970c8f6cde"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "The embeddings are learned several times using the training set, then the average is taken.",
"evidence": [
"For each variable in encoding set, learn the new embeddings using the embeddings train set . This should be done simultaneously (all variable embeddings estimated at once, as explained in the next section).",
"Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics). Whenever we want to analyse a particular model (e.g. to check the coefficients of a choice model), we select the one with the highest likelihood at the development set (i.e. in practice, its out-of-sample generalization performance), and report its performance on the test set."
],
"highlighted_evidence": [
"For each variable in encoding set, learn the new embeddings using the embeddings train set .",
"Since there is stochasticity in the embeddings training model, we will repeat the above multiple times, for the different experiments in the paper (and report the respective mean and standard deviation statistics)."
]
}
],
"annotation_id": [
"e7fa4a9302fccb534138aec8e7fcdff69791ab63"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "The data from collected travel surveys is used to model travel behavior.",
"evidence": [
"Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair."
],
"highlighted_evidence": [
"Differently to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of categorical nature, but typically end up as dummy encoding, due to segmentation, such as age, income, or even origin/destination pair."
]
}
],
"annotation_id": [
"135e6e05c3d4c16db9e073bdeb856ed2f91820a2"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "The coefficients are projected back to the dummy variable space.",
"evidence": [
"We will apply the methodology to the well-known “Swissmetro\" dataset. We will compare it with a dummy variables and PCA baselines. We will follow the 3-way experimental design mentioned before: split the dataset into train, development and test sets, so that the embeddings, PCA eigenvectors and the choice model are estimated from the same train and development sets, and validate it out-of-sample. For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space."
],
"highlighted_evidence": [
"For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space."
]
}
],
"annotation_id": [
"cefa81dfd716c6568a263ac073777e97fc32f783"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
]
}
|
{
"caption": [
"Figure 1: The skip gram architecture [7]",
"Figure 2: Visualization of a subset of words from FastText word embeddings database [8]",
"Figure 3: Some classical examples of embeddings algebra [9]",
"Figure 4: The general idea",
"Figure 5: Travel embeddings model",
"Figure 6: Travel embeddings model with regularization (left); Complete model, combining multiple travel embeddings layers (right).",
"Table 1: Multinomial Logit Model Regression Results - original model",
"Table 2: Multinomial Logit Model Regression coefficients - original model (**= p<0.05)",
"Table 3: New dimensionality (K) of encoding set variables",
"Figure 7: Embeddings model training performance",
"Figure 8: MDS visualizations of embeddings results",
"Figure 9: Switzerland’s cantons",
"Table 4: Testset results for embeddings model",
"Table 5: Multinomial Logit Model Regression Results - embeddings model (* = p<0.1; ** = p<0.05)",
"Table 6: Multinomial Logit Model Regression Results - embeddings model projected into dummy variable space (* = p<0.1; ** = p<0.05)",
"Table 7: Multinomial Logit Model Regression Results for dummy variable model with OD variables",
"Table 8: Multinomial Logit Model Regression Results for dummy variable model without OD variables",
"Table 9: Multinomial Logit Model Regression coefficients for dummy variable model without OD variables",
"Table 10: Results for PCA model",
"Table 11: Multinomial Logit Model Regression Results for PCA model",
"Table 12: Summary of results",
"Figure 10: R-square performance with percentage of “expensive\" survey. Left: light+detailed survey; Right: Big Data+detailed survey Note: Absence of data points means either negative R-squared, or model not possible to estimate (e.g. due to singular matrix)"
],
"file": [
"6-Figure1-1.png",
"7-Figure2-1.png",
"8-Figure3-1.png",
"10-Figure4-1.png",
"11-Figure5-1.png",
"12-Figure6-1.png",
"13-Table1-1.png",
"13-Table2-1.png",
"14-Table3-1.png",
"15-Figure7-1.png",
"16-Figure8-1.png",
"17-Figure9-1.png",
"17-Table4-1.png",
"18-Table5-1.png",
"19-Table6-1.png",
"21-Table7-1.png",
"21-Table8-1.png",
"22-Table9-1.png",
"23-Table10-1.png",
"24-Table11-1.png",
"24-Table12-1.png",
"25-Figure10-1.png"
]
}
|
1908.05434
|
Sex Trafficking Detection with Ordinal Regression Neural Networks
|
Sex trafficking is a global epidemic. Escort websites are a primary vehicle for selling the services of such trafficking victims and thus a major driver of trafficker revenue. Many law enforcement agencies do not have the resources to manually identify leads from the millions of escort ads posted across dozens of public websites. We propose an ordinal regression neural network to identify escort ads that are likely linked to sex trafficking. Our model uses a modified cost function to mitigate inconsistencies in predictions often associated with nonparametric ordinal regression and leverages recent advancements in deep learning to improve prediction accuracy. The proposed method significantly improves on the previous state-of-the-art on Trafficking-10K, an expert-annotated dataset of escort ads. Additionally, because traffickers use acronyms, deliberate typographical errors, and emojis to replace explicit keywords, we demonstrate how to expand the lexicon of trafficking flags through word embeddings and t-SNE.
|
{
"section_name": [
"Introduction",
"Related Work",
"Method",
"Word Embeddings",
"Gated-Feedback Recurrent Neural Network",
"Multi-Labeled Logistic Regression Layer",
"Experiments",
"Datasets",
"Comparison with Baselines",
"Ablation Test",
"Qualitative Analysis of Predictions",
"Emoji Analysis",
"Discussion",
"Acknowledgments",
"Hyperparameters of the proposed ordinal regression neural network",
"Access to the source materials"
],
"paragraphs": [
[
"Globally, human trafficking is one of the fastest growing crimes and, with annual profits estimated to be in excess of 150 billion USD, it is also among the most lucrative BIBREF0 . Sex trafficking is a form of human trafficking which involves sexual exploitation through coercion. Recent estimates suggest that nearly 4 million adults and 1 million children are being victimized globally on any given day; furthermore, it is estimated that 99 percent of victims are female BIBREF1 . Escort websites are an increasingly popular vehicle for selling the services of trafficking victims. According to a recent survivor survey BIBREF2 , 38% of underage trafficking victims who were enslaved prior to 2004 were advertised online, and that number rose to 75% for those enslaved after 2004. Prior to its shutdown in April 2018, the website Backpage was the most frequently used online advertising platform; other popular escort websites include Craigslist, Redbook, SugarDaddy, and Facebook BIBREF2 . Despite the seizure of Backpage, there were nearly 150,000 new online sex advertisements posted per day in the U.S. alone in late 2018 BIBREF3 ; even with many of these new ads being re-posts of existing ads and traffickers often posting multiple ads for the same victims BIBREF2 , this volume is staggering.",
"Because of their ubiquity and public access, escort websites are a rich resource for anti-trafficking operations. However, many law enforcement agencies do not have the resources to sift through the volume of escort ads to identify those coming from potential traffickers. One scalable and efficient solution is to build a statistical model to predict the likelihood of an ad coming from a trafficker using a dataset annotated by anti-trafficking experts. We propose an ordinal regression neural network tailored for text input. This model comprises three components: (i) a Word2Vec model BIBREF4 that maps each word from the text input to a numeric vector, (ii) a gated-feedback recurrent neural network BIBREF5 that sequentially processes the word vectors, and (iii) an ordinal regression layer BIBREF6 that produces a predicted ordinal label. We use a modified cost function to mitigate inconsistencies in predictions associated with nonparametric ordinal regression. We also leverage several regularization techniques for deep neural networks to further improve model performance, such as residual connection BIBREF7 and batch normalization BIBREF8 . We conduct our experiments on Trafficking-10k BIBREF9 , a dataset of escort ads for which anti-trafficking experts assigned each sample one of seven ordered labels ranging from “1: Very Unlikely (to come from traffickers)” to “7: Very Likely”. Our proposed model significantly outperforms previously published models BIBREF9 on Trafficking-10k as well as a variety of baseline ordinal regression models. In addition, we analyze the emojis used in escort ads with Word2Vec and t-SNE BIBREF10 , and we show that the lexicon of trafficking-related emojis can be subsequently expanded.",
"In Section SECREF2 , we discuss related work on human trafficking detection and ordinal regression. In Section SECREF3 , we present our proposed model and detail its components. In Section SECREF4 , we present the experimental results, including the Trafficking-10K benchmark, a qualitative analysis of the predictions on raw data, and the emoji analysis. In Section SECREF5 , we summarize our findings and discuss future work."
],
[
"Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex, which focuses on search functionalities in the dark web; Spotlight, which flags suspicious ads and links images appearing in multiple ads; Traffic Jam, which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam, which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces the LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves on HTDN's benchmark despite using only text input. As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads; however, we go further by analyzing the emojis' embeddings and thereby expanding the trafficking lexicon.",
"Ordinal regression: We briefly review ordinal regression before introducing the proposed methodology. We assume that the training data are INLINEFORM0 , where INLINEFORM1 are the features and INLINEFORM2 is the response; INLINEFORM3 is the set of INLINEFORM4 ordered labels INLINEFORM5 with INLINEFORM6 . Many ordinal regression methods learn a composite map INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 have the interpretation that INLINEFORM10 is a latent “score” which is subsequently discretized into a category by INLINEFORM11 . INLINEFORM12 is often estimated by empirical risk minimization, i.e., by minimizing a loss function INLINEFORM13 averaged over the training data. Standard choices of INLINEFORM14 and INLINEFORM15 are reviewed by J. Rennie & N. Srebro ( BIBREF11 ).",
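The two-stage view above (a latent score that is subsequently discretized by ordered cut-points) can be made concrete with a short sketch; the function name and threshold values are illustrative, not taken from the paper:

```python
def discretize(score, thresholds):
    """Map a latent score to an ordinal label in {1, ..., K} using
    K-1 ordered cut-points: the label is 1 plus the number of
    thresholds the score exceeds."""
    rank = 1
    for t in sorted(thresholds):
        if score > t:
            rank += 1
    return rank
```

Fitting then amounts to estimating the score function and the cut-points jointly by empirical risk minimization.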
"Another common approach to ordinal regression, which we adopt in our proposed method, is to transform the label prediction into a series of INLINEFORM0 binary classification sub-problems, wherein the INLINEFORM1 th sub-problem is to predict whether or not the true label exceeds INLINEFORM2 BIBREF12 , BIBREF13 . For example, one might use a series of logistic regression models to estimate the conditional probabilities INLINEFORM3 for each INLINEFORM4 . J. Cheng et al. ( BIBREF6 ) estimated these probabilities jointly using a neural network; this was later extended to image data BIBREF14 as well as text data BIBREF15 , BIBREF16 . However, as acknowledged by J. Cheng et al. ( BIBREF6 ), the estimated probabilities need not respect the ordering INLINEFORM5 for all INLINEFORM6 and INLINEFORM7 . We force our estimator to respect this ordering through a penalty on its violation."
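A minimal sketch of this reformulation follows; the encoding matches the description above, while the decoding rule (counting the sub-problems that predict "exceeds") is one common choice, and the function names are ours:

```python
def ordinal_to_binary_targets(rank, num_ranks):
    """Encode an ordinal label as K-1 binary indicators, where the
    k-th indicator is 1 iff the true rank exceeds k (ranks 1..K)."""
    return [1 if rank > k else 0 for k in range(1, num_ranks)]

def binary_probs_to_rank(probs, threshold=0.5):
    """Decode K-1 estimates of P(rank > k) back to a single rank:
    1 plus the number of sub-problems predicting 'exceeds'."""
    return 1 + sum(p > threshold for p in probs)
```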
],
[
"Our proposed ordinal regression model consists of the following three components: Word embeddings pre-trained by a Skip-gram model, a gated-feedback recurrent neural network that constructs summary features from sentences, and a multi-labeled logistic regression layer tailored for ordinal regression. See Figure SECREF3 for a schematic. The details of its components and their respective alternatives are discussed below.",
" figure Overview of the ordinal regression neural network for text input. INLINEFORM0 represents a hidden state in a gated-feedback recurrent neural network."
],
[
"Vector representations of words, also known as word embeddings, can be obtained through unsupervised learning on a large text corpus so that certain linguistic regularities and patterns are encoded. Compared to Latent Semantic Analysis BIBREF17 , embedding algorithms using neural networks are particularly good at preserving linear regularities among words in addition to grouping similar words together BIBREF18 . Such embeddings can in turn help other algorithms achieve better performances in various natural language processing tasks BIBREF4 .",
"Unfortunately, the escort ads contain a plethora of emojis, acronyms, and (sometimes deliberate) typographical errors that are not encountered in more standard text data, which suggests that it is likely better to learn word embeddings from scratch on a large collection of escort ads instead of using previously published embeddings BIBREF9 . We use 168,337 ads scraped from Backpage as our training corpus and the Skip-gram model with Negative sampling BIBREF4 as our model."
],
[
"To process entire sentences and paragraphs after mapping the words to embeddings, we need a model to handle sequential data. Recurrent neural networks (RNNs) have recently seen great success at modeling sequential data, especially in natural language processing tasks BIBREF19 . On a high level, an RNN is a neural network that processes a sequence of inputs one at a time, taking the summary of the sequence seen so far from the previous time point as an additional input and producing a summary for the next time point. One of the most widely used variations of RNNs, the Long short-term memory network (LSTM), uses various gates to control the information flow and is able to better preserve long-term dependencies in the running summary compared to a basic RNN BIBREF20 . In our implementation, we use a further refinement of multi-layered LSTMs, Gated-feedback recurrent neural networks (GF-RNNs), which tend to capture dependencies across different timescales more easily BIBREF5 .",
"Regularization techniques for neural networks including Dropout BIBREF21 , Residual connection BIBREF7 , and Batch normalization BIBREF8 are added to GF-RNN for further improvements.",
"After GF-RNN processes an entire escort ad, the average of the hidden states of the last layer becomes the input for the multi-labeled logistic regression layer which we discuss next."
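As a toy stand-in for this pipeline (a single-layer tanh RNN rather than the gated, multi-layer GF-RNN actually used; weight shapes are illustrative), the "running summary plus mean-pooling" idea can be sketched as:

```python
import numpy as np

def rnn_mean_pool(embeddings, W_x, W_h, b):
    """Run a minimal tanh RNN over a sequence of word vectors and
    return the mean of the hidden states as a fixed-length summary
    (mirroring the mean-pooled last-layer states described above)."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in embeddings:
        h = np.tanh(W_x @ x + W_h @ h + b)  # summary so far + next word
        states.append(h)
    return np.mean(states, axis=0)
```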
],
[
"As noted previously, the ordinal regression problem can be cast into a series of binary classification problems and thereby utilize the large repository of available classification algorithms BIBREF12 , BIBREF13 , BIBREF14 . One formulation is as follows. Given INLINEFORM0 total ranks, the INLINEFORM1 -th binary classifier is trained to predict the probability that a sample INLINEFORM2 has rank larger than INLINEFORM3 . Then the predicted rank is INLINEFORM4 ",
"In a classification task, the final layer of a deep neural network is typically a softmax layer with dimension equal to the number of classes BIBREF20 . Using the ordinal-regression-to-binary-classifications formulation described above, J. Cheng et al. ( BIBREF6 ) replaced the softmax layer in their neural network with a INLINEFORM0 -dimensional sigmoid layer, where each neuron serves as a binary classifier (see Figure SECREF7 but without the order penalty to be discussed later).",
"With the sigmoid activation function, the output of the INLINEFORM0 th neuron can be viewed as the predicted probability that the sample has rank greater than INLINEFORM5 . Alternatively, the entire sigmoid layer can be viewed as performing multi-labeled logistic regression, where the INLINEFORM6 th label is the indicator of the sample's rank being greater than INLINEFORM7 . The training data are thus re-formatted accordingly so that the response variable for a sample with rank k becomes a binary vector whose first k-1 entries are ones and whose remaining entries are zeros. J. Cheng et al.'s ( BIBREF6 ) final layer was preceded by a simple feed-forward network. In our case, word embeddings and GF-RNN allow us to construct a feature vector of fixed length from text input, so we can simply attach the multi-labeled logistic regression layer to the output of GF-RNN to complete an ordinal regression neural network for text input.",
"The violation of monotonicity in the estimated probabilities (e.g., INLINEFORM0 for some INLINEFORM1 and INLINEFORM2 ) has remained an open issue since the original ordinal regression neural network proposal of J. Cheng et al. ( BIBREF6 ). This is perhaps due in part to the belief that correcting this issue would significantly increase training complexity BIBREF14 . We propose an effective and computationally efficient solution to avoid conflicting predictions as follows: penalize such conflicts in the training phase by adding INLINEFORM3 ",
"to the loss function for a sample INLINEFORM0 , where INLINEFORM1 is a penalty parameter (Figure SECREF7 ). For sufficiently large INLINEFORM2 the estimated probabilities will respect the monotonicity condition; respecting this condition improves the interpretability of the predictions, which is vital in applications like the one we consider here as stakeholders are given the estimated probabilities. We also hypothesize that the order penalty may serve as a regularizer to improve each binary classifier (see the ablation test in Section SECREF15 ).",
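A numpy sketch of the penalized per-sample loss follows. Since the exact functional form of the penalty term appears only in the paper's equation, a hinge on adjacent probability pairs is assumed here, with `lam` playing the role of the conflict-penalty hyperparameter (0.5 in the Appendix):

```python
import numpy as np

def order_penalty(probs, lam=0.5):
    """Penalize violations of P(rank>1) >= P(rank>2) >= ... via a hinge
    on adjacent pairs (an assumed form, not the paper's exact equation)."""
    probs = np.asarray(probs, dtype=float)
    violations = np.maximum(0.0, probs[1:] - probs[:-1])
    return lam * violations.sum()

def ordinal_loss(probs, targets, lam=0.5):
    """Binary cross-entropy averaged over the K-1 sub-problems,
    plus the order penalty."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-7, 1 - 1e-7)
    targets = np.asarray(targets, dtype=float)
    bce = -(targets * np.log(probs) + (1 - targets) * np.log(1 - probs)).mean()
    return bce + order_penalty(probs, lam)
```

For sufficiently large `lam`, minimizing this loss drives the adjacent-pair violations to zero, which is the monotonicity property argued for above.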
" figure Ordinal regression layer with order penalty.",
"All three components of our model (word embeddings, GF-RNN, and multi-labeled logistic regression layer) can be trained jointly, with word embeddings optionally held fixed or given a smaller learning rate for fine-tuning. The hyperparameters for all components are given in the Appendix. They are selected according to either literature or grid-search."
],
[
"We first describe the datasets we use to train and evaluate our models. Then we present a detailed comparison of our proposed model with commonly used ordinal regression models as well as the previous state-of-the-art classification model by E. Tong et al. ( BIBREF9 ). To assess the effect of each component in our model, we perform an ablation test in which the components are swapped out for their more standard alternatives one at a time. Next, we perform a qualitative analysis of the model predictions on the raw data, which are scraped from a different escort website than the one that provides the labeled training data. Finally, we conduct an emoji analysis using the word embeddings trained on raw escort ads.",
],
[
"We use raw texts scraped from Backpage and TNABoard to pre-train the word embeddings, and use the same labeled texts E. Tong et al. ( BIBREF9 ) used to conduct model comparisons. The raw text dataset consists of 44,105 ads from TNABoard and 124,220 ads from Backpage. Data cleaning/preprocessing includes joining the title and the body of an ad; adding white spaces around every emoji so that it can be tokenized properly; stripping tabs, line breaks, punctuations, and extra white spaces; removing phone numbers; and converting all letters to lower case. We have ensured that the raw dataset has no overlap with the labeled dataset to avoid bias in test accuracy. While it is possible to scrape more raw data, we did not observe significant improvements in model performances when the size of raw data increased from INLINEFORM0 70,000 to INLINEFORM1 170,000, hence we assume that the current raw dataset is sufficiently large.",
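The cleaning steps above can be sketched as follows; the regular expressions are illustrative approximations, not the authors' exact preprocessing code:

```python
import re

def preprocess_ad(title, body):
    """Join title and body, space out emojis for tokenization, strip
    phone numbers and punctuation, collapse whitespace, lower-case."""
    text = f"{title} {body}"
    # Surround emoji/symbol codepoints with spaces so they tokenize separately.
    text = re.sub(r"([\U0001F300-\U0001FAFF\u2600-\u27BF])", r" \1 ", text)
    # Remove phone-number-like digit runs.
    text = re.sub(r"\+?\d[\d\-\(\) ]{6,}\d", " ", text)
    # Strip punctuation, tabs, and line breaks (keeping words and emojis).
    text = re.sub(r"[^\w\s\U0001F300-\U0001FAFF\u2600-\u27BF]", " ", text)
    # Collapse extra whitespace and lower-case.
    return re.sub(r"\s+", " ", text).strip().lower()
```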
"The labeled dataset is called Trafficking-10k. It consists of 12,350 ads from Backpage labeled by experts in human trafficking detection BIBREF9 . Each label is one of seven ordered levels of likelihood that the corresponding ad comes from a human trafficker. Descriptions and sample proportions of the labels are in Table TABREF11 . The original Trafficking-10K includes both texts and images, but as mentioned in Section SECREF1 , only the texts are used in our case. We apply the same preprocessing to Trafficking-10k as we do to raw data."
],
[
"We compare our proposed ordinal regression neural network (ORNN) to Immediate-Threshold ordinal logistic regression (IT) BIBREF11 , All-Threshold ordinal logistic regression (AT) BIBREF11 , Least Absolute Deviation (LAD) BIBREF22 , BIBREF23 , and multi-class logistic regression (MC) which ignores the ordering. The primary evaluation metrics are Mean Absolute Error (MAE) and macro-averaged Mean Absolute Error ( INLINEFORM0 ) BIBREF24 . To compare our model with the previous state-of-the-art classification model for escort ads, the Human Trafficking Deep Network (HTDN) BIBREF9 , we also polarize the true and predicted labels into two classes, “1-4: Unlikely” and “5-7: Likely”; then we compute the binary classification accuracy (Acc.) as well as the weighted binary classification accuracy (Wt. Acc.) given by INLINEFORM1 ",
"Note that for applications in human trafficking detection, MAE and Acc. are of primary interest, whereas for a more general comparison among the models, the class-imbalance-robust metrics INLINEFORM0 and Wt. Acc. may be more suitable. Bootstrapping or increasing the weight of samples in smaller classes can improve INLINEFORM1 and Wt. Acc. at the cost of MAE and Acc.",
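The evaluation metrics can be sketched as below; because the exact weighting formula for Wt. Acc. is elided above, it is assumed here to be the unweighted mean of the per-class accuracies (i.e., balanced accuracy):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error over all samples."""
    return np.abs(np.asarray(y_true) - np.asarray(y_pred)).mean()

def macro_mae(y_true, y_pred):
    """Macro-averaged MAE: MAE computed within each true class, then
    averaged over classes, which is robust to class imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    return float(np.mean([np.abs(y_pred[y_true == c] - c).mean() for c in classes]))

def weighted_binary_accuracy(y_true, y_pred):
    """Assumed form of Wt. Acc.: the unweighted mean of the accuracy
    on each binary class (balanced accuracy)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    return float(np.mean(accs))
```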
"The text data need to be vectorized before they can be fed into the baseline models (whereas vectorization is built into ORNN). The standard practice is to tokenize the texts using n-grams and then create weighted term frequency vectors using the term frequency (TF)-inverse document frequency (IDF) scheme BIBREF25 , BIBREF26 . The specific variation we use is the recommended unigram + sublinear TF + smooth IDF BIBREF26 , BIBREF27 . Dimension reduction techniques such as Latent Semantic Analysis BIBREF17 can be optionally applied to the frequency vectors, but B. Schuller et al. ( BIBREF28 ) concluded from their experiments that dimension reduction on frequency vectors actually hurts model performance, which our preliminary experiments agree with.",
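A self-contained sketch of the unigram + sublinear TF + smooth IDF weighting, using the common smooth-IDF form log((1+N)/(1+df)) + 1; tokenization and document-frequency counting are assumed to be done elsewhere:

```python
import math
from collections import Counter

def tfidf_vector(doc_tokens, corpus_dfs, n_docs):
    """Weight one tokenized document with sublinear TF (1 + log tf)
    and smooth IDF (log((1 + N) / (1 + df)) + 1)."""
    counts = Counter(doc_tokens)
    return {
        term: (1.0 + math.log(tf))
        * (math.log((1 + n_docs) / (1 + corpus_dfs.get(term, 0))) + 1.0)
        for term, tf in counts.items()
    }
```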
"All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is read from the original paper BIBREF9 . During each train-test split, INLINEFORM0 of the training set is further reserved as the validation set for tuning hyperparameters such as L2-penalty in IT, AT and LAD, and learning rate in ORNN. So the overall train-validation-test ratio is 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out that there is no unbiased estimator of the variance of CV BIBREF29 , we report the naive standard error treating metrics across CV as independent.",
"We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd-best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter uses both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models, except for LAD, can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss.",
],
[
"To ensure that we do not unnecessarily complicate our ORNN model, and to assess the impact of each component on the final model performance, we perform an ablation test. Using the same CV and evaluation metrics, we make the following replacements separately and re-evaluate the model: 1. Replace word embeddings pre-trained from skip-gram model with randomly initialized word embeddings; 2. replace gated-feedback recurrent neural network with long short-term memory network (LSTM); 3. disable batch normalization; 4. disable residual connection; 5. replace the multi-labeled logistic regression layer with a softmax layer (i.e., let the model perform classification, treating the ordinal response variable as a categorical variable with INLINEFORM0 classes); 6. replace the multi-labeled logistic regression layer with a 1-dimensional linear layer (i.e., let the model perform regression, treating the ordinal response variable as a continuous variable) and round the prediction to the nearest integer during testing; 7. set the order penalty to 0. The results are shown in Table TABREF16 .",
"The proposed ORNN once again has all the best metrics except for Wt. Acc., which is the 2nd best. This suggests that each component indeed makes a contribution. Note that if we disregard the ordinal labels and perform classification or regression, MAE degrades by a large margin. Setting the order penalty to 0 does not deteriorate the performance by much; however, the percent of conflicting binary predictions (see Section SECREF7 ) rises from 1.4% to 5.2%. So adding an order penalty helps produce more interpretable results.",
],
[
"To qualitatively evaluate how well our model predicts on raw data and to observe potential patterns in the flagged samples, we obtain predictions on the 44,105 unlabeled ads from TNABoard with the ORNN model trained on Trafficking-10k, then examine the samples with the highest predicted likelihood of coming from traffickers. Below are the top three samples that the model considers likely:",
"[itemsep=0pt]",
"“amazing reviewed crystal only here till fri book now please check our site for the services the girls provide all updates specials photos rates reviews njfantasygirls ...look who s back amazing reviewed model samantha...brand new spinner jessica special rate today 250 hr 21 5 4 120 34b total gfe total anything goes no limits...”",
"“2 hot toght 18y o spinners 4 amazing providers today specials...”",
"“asian college girl is visiting bellevue service type escort hair color brown eyes brown age 23 height 5 4 body type slim cup size c cup ethnicity asian service type escort i am here for you settle men i am a tiny asian girl who is waiting for a gentlemen...”",
"Some interesting patterns in the samples with high predicted likelihood (here we only show three) include: mention of multiple names or INLINEFORM0 providers in a single ad; possibly intentional typos and abbreviations for sensitive words such as “tight” INLINEFORM1 “toght” and “18 year old” INLINEFORM2 “18y o”; keywords indicating that the providers are traveling, such as “till fri”, “look who s back”, and “visiting”; keywords that hint at the providers potentially being underage, such as “18y o”, “college girl”, and “tiny”; and switching between third-person and first-person narratives.",
],
[
"The fight against human traffickers is adversarial and dynamic. Traffickers often avoid using explicit keywords when advertising victims, instead using acronyms, intentional typos, and emojis BIBREF9 . Law enforcement maintains a lexicon of trafficking flags mapping certain emojis to their potential true meanings (e.g., the cherry emoji can indicate an underage victim), but compiling such a lexicon manually is expensive, requires frequent updating, and relies on domain expertise that is hard to obtain (e.g., insider information from traffickers or their victims). To make matters worse, traffickers change their dictionaries over time and regularly switch to new emojis to replace certain keywords BIBREF9 . In such a dynamic and adversarial environment, the need for a data-driven approach to updating the existing lexicon is evident.",
"As mentioned in Section SECREF5 , training a skip-gram model on a text corpus can map words (including emojis) used in similar contexts to similar numeric vectors. Besides using the vectors learned from the raw escort ads to train ORNN, we can directly visualize the vectors for the emojis to help identify their relationships, by mapping the vectors to a 2-dimensional space using t-SNE BIBREF10 (Figure FIGREF24 ).",
"We can first empirically assess the quality of the emoji map by noting that similar emojis do seem clustered together: the smileys near the coordinate (2, 3), the flowers near (-6, -1), the heart shapes near (-8, 1), the phones near (-2, 4) and so on. It is worth emphasizing that the skip-gram model learns the vectors of these emojis based on their contexts in escort ads and not their visual representations, so the fact that the visually similar emojis are close to one another in the map suggests that the vectors have been learned as desired.",
"The emoji map can assist anti-trafficking experts in expanding the existing lexicon of trafficking flags. For example, according to the lexicon we obtained from Global Emancipation Network, the cherry emoji and the lollipop emoji are both flags for underage victims. Near (-3, -4) in the map, right next to these two emojis, are the porcelain dolls emoji, the grapes emoji, the strawberry emoji, the candy emoji, the ice cream emojis, and maybe the 18-slash emoji, indicating that they are all used in similar contexts and perhaps should all be flags for underage victims in the updated lexicon.",
"If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos."
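The linking idea, finding which known flags a new emoji's embedding sits closest to, can be sketched with cosine similarity over toy 2-dimensional vectors (the real skip-gram vectors would be 128-dimensional, and the token names here are hypothetical):

```python
import numpy as np

def nearest_flags(emb, query, lexicon, k=3):
    """Rank known trafficking-flag tokens by cosine similarity to a
    query token's embedding; 'emb' maps token -> vector."""
    q = emb[query]
    q = q / np.linalg.norm(q)
    scored = []
    for tok in lexicon:
        v = emb[tok]
        scored.append((float(q @ (v / np.linalg.norm(v))), tok))
    return [tok for _, tok in sorted(scored, reverse=True)[:k]]
```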
],
[
"Human trafficking is a form of modern-day slavery that victimizes millions of people. It has become the norm for sex traffickers to use escort websites to openly advertise their victims. We designed an ordinal regression neural network (ORNN) to predict the likelihood that an escort ad comes from a trafficker, which can drastically narrow down the set of possible leads for law enforcement. Our ORNN achieved state-of-the-art performance on Trafficking-10K BIBREF9 , outperforming all baseline ordinal regression models as well as improving the classification accuracy over the Human Trafficking Deep Network BIBREF9 . We also conducted an emoji analysis and showed how to use word embeddings learned from raw text data to help expand the lexicon of trafficking flags.",
"Since our experiments, there have been considerable advancements in language representation models, such as BERT BIBREF30 . The new language representation models can be combined with our ordinal regression layer, replacing the skip-gram model and GF-RNN, to potentially further improve our results. However, our contributions of improving the cost function for ordinal regression neural networks, qualitatively analyzing patterns in the predicted samples, and expanding the trafficking lexicon through a data-driven approach are not dependent on a particular choice of language representation model.",
"As for future work in trafficking detection, we can design multi-modal ordinal regression networks that utilize both image and text data. But given the time and resources required to label escort ads, we may explore more unsupervised learning or transfer learning algorithms, such as using object detection BIBREF31 and matching algorithms to match hotel rooms in the images."
],
[
"We thank Cara Jones and Marinus Analytics LLC for sharing the Trafficking-10K dataset. We thank Praveen Bodigutla for his suggestions on Natural Language Processing literature."
],
[
"Word Embeddings: pretraining model type: Skip-gram; speedup method: negative sampling; number of negative samples: 100; noise distribution: unigram distribution raised to the 3/4 power; batch size: 16; window size: 5; minimum word count: 5; number of epochs: 50; embedding size: 128; pretraining learning rate: 0.2; fine-tuning learning rate scale: 1.0.",
"GF-RNN: hidden size: 128; dropout: 0.2; number of layers: 3; gradient clipping norm: 0.25; L2 penalty: 0.00001; learning rate decay factor: 2.0; learning rate decay patience: 3; early stop patience: 9; batch size: 200; batch normalization: true; residual connection: true; output layer type: mean-pooling; minimum word count: 5; maximum input length: 120.",
"Multi-labeled logistic regression layer: task weight scheme: uniform; conflict penalty: 0.5."
],
[
"The fight against human trafficking is adversarial, hence the access to the source materials in anti-trafficking research is typically not available to the general public by choice, but granted to researchers and law enforcement individually upon request.",
"Source code:",
"https://gitlab.com/BlazingBlade/TrafficKill",
"Trafficking-10k: Contact",
"cara@marinusanalytics.com",
"Trafficking lexicon: Contact",
"sherrie@globalemancipation.ngo"
]
]
}
|
{
"question": [
"By how much do they outperform previous state-of-the-art models?",
"Do they use pretrained word embeddings?",
"How is the lexicon of trafficking flags expanded?"
],
"question_id": [
"2d4d0735c50749aa8087d1502ab7499faa2f0dd8",
"43761478c26ad65bec4f0fd511ec3181a100681c",
"01866fe392d9196dda1d0b472290edbd48a99f66"
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Proposed ORNN has 0.769, 1.238, 0.818, 0.772 compared to 0.778, 1.244, 0.813, 0.781 of best state of the art result on Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.)",
"evidence": [
"All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is read from the original paper BIBREF9 . During each train-test split, INLINEFORM0 of the training set is further reserved as the validation set for tuning hyperparameters such as L2-penalty in IT, AT and LAD, and learning rate in ORNN. So the overall train-validation-test ratio is 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out that there is no unbiased estimator of the variance of CV BIBREF29 , we report the naive standard error treating metrics across CV as independent.",
"We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter use both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models except for LAD can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss.",
"FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.",
"FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted."
],
"highlighted_evidence": [
"We report the mean metrics from the CV in Table TABREF14 .",
"We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models.",
"FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.",
"FLOAT SELECTED: Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted."
]
}
],
"annotation_id": [
"1384b1e2ddc8d8417896cb3664c4586037474138"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Trafficking detection: There have been several software products designed to aid anti-trafficking efforts. Examples include Memex which focuses on search functionalities in the dark web; Spotlight which flags suspicious ads and links images appearing in multiple ads; Traffic Jam which seeks to identify patterns that connect multiple ads to the same trafficking organization; and TraffickCam which aims to construct a crowd-sourced database of hotel room images to geo-locate victims. These research efforts have largely been isolated, and few research articles on machine learning for trafficking detection have been published. Closest to our work is the Human Trafficking Deep Network (HTDN) BIBREF9 . HTDN has three main components: a language network that uses pretrained word embeddings and a long short-term memory network (LSTM) to process text input; a vision network that uses a convolutional network to process image input; and another convolutional network to combine the output of the previous two networks and produce a binary classification. Compared to the language network in HTDN, our model replaces LSTM with a gated-feedback recurrent neural network, adopts certain regularizations, and uses an ordinal regression layer on top. It significantly improves HTDN's benchmark despite only using text input. As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon."
],
"highlighted_evidence": [
"As in the work of E. Tong et al. ( BIBREF9 ), we pre-train word embeddings using a skip-gram model BIBREF4 applied to unlabeled data from escort ads, however, we go further by analyzing the emojis' embeddings and thereby expand the trafficking lexicon."
]
}
],
"annotation_id": [
"7a121e16f4f5def4e5700dfc4d6f588f03ac00a1"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos."
],
"highlighted_evidence": [
"If we re-train the skip-gram model and update the emoji map periodically on new escort ads, when traffickers switch to new emojis, the map can link the new emojis to the old ones, assisting anti-trafficking experts in expanding the lexicon of trafficking flags. This approach also works for acronyms and deliberate typos."
]
}
],
"annotation_id": [
"26f9aea7a6585b16f09cf6f41dfbf0a3f9f8db81"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
}
|
{
"caption": [
"Figure 1: Overview of the ordinal regression neural network for text input. H represents a hidden state in a gated-feedback recurrent neural network.",
"Figure 2: Ordinal regression layer with order penalty.",
"Table 1: Description and distribution of labels in Trafficking-10K.",
"Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.",
"Table 3: Ablation test. Except for models everything is the same as Table 2.",
"Figure 3: Emoji map produced by applying t-SNE to the emojis’ vectors learned from escort ads using skip-gram model. For visual clarity, only the emojis that appeared most frequently in the escort ads we scraped are shown out of the total 968 emojis that appeared."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Figure3-1.png"
]
}
|
1612.05310
|
Modeling Trolling in Social Media Conversations
|
Social media websites, electronic newspapers and Internet forums allow visitors to leave comments for others to read and interact. This exchange is not free from participants with malicious intentions, who troll others by posting messages that are intended to be provocative, offensive, or menacing. With the goal of facilitating the computational modeling of trolling, we propose a trolling categorization that is novel in the sense that it allows comment-based analysis from both the trolls' and the responders' perspectives, characterizing these two perspectives using four aspects, namely, the troll's intention and his intention disclosure, as well as the responder's interpretation of the troll's intention and her response strategy. Using this categorization, we annotate and release a dataset containing excerpts of Reddit conversations involving suspected trolls and their interactions with other users. Finally, we identify the difficult-to-classify cases in our corpus and suggest potential solutions for them.
|
{
"section_name": [
"Introduction",
"Related Work",
"Trolling Categorization",
"Conversation Excerpts",
"Corpus and Annotation",
"Trolling Attempt Prediction",
"Feature Sets",
"Results",
"Error Analysis",
"Conclusion and Future Work"
],
"paragraphs": [
[
"In contrast to traditional content distribution channels like television, radio and newspapers, Internet opened the door for direct interaction between the content creator and its audience. Young people are now gaining more frequent access to online, networked media. Although most of the time, their Internet use is harmless, there are some risks associated with these online activities, such as the use of social networking sites (e.g., Twitter, Facebook, Reddit). The anonymity and freedom provided by social networks makes them vulnerable to threatening situations on the Web, such as trolling.",
"Trolling is “the activity of posting messages via a communications network that are intended to be provocative, offensive or menacing” BIBREF0 . People who post such comments are known as trolls. According to hardaker2010trolling, a troll's “real intention(s) is/are to cause disruption and/or trigger or exacerbate conflict for the purpose of their own amusement”. Worse still, the troll's comments may have a negative psychological impact on his target/victim and possibly others who participated in the same conversation. It is therefore imperative to identify such comments and perhaps even terminate the conversation before it evolves into something psychologically disruptive for the participants. Monitoring conversations is a labor-intensive task: it can potentially place a severe burden on the moderators, and it may not be an effective solution when traffic is heavy. This calls for the need to develop automatic methods for identifying malicious comments, which we will refer to as trolling attempts in this paper.",
"In fact, there have recently been some attempts to automatically identify comments containing cyberbullying (e.g., van2015detection), which corresponds to the most severe cases of trolling BIBREF0 . However, we believe that it is important not only to identify trolling attempts, but also comments that could have a negative psychological impact on their recipients. As an example, consider the situation where a commenter posts a comment with the goal of amusing others. However, it is conceivable that not everybody would be aware of these playful intentions, and these people may disagree or dislike the mocking comments and take them as inappropriate, prompting a negative reaction or psychological impact on themselves.",
"In light of this discussion, we believe that there is a need to identify not only the trolling attempts, but also comments that could have a negative psychological impact on their recipients. To this end, we seek to achieve the following goals in this paper. First, we propose a comprehensive categorization of trolling that allows us to model not only the troll's intention given his trolling attempt, but also the recipients' perception of the troll's intention and subsequently their reaction to the trolling attempt. This categorization gives rise to very interesting problems in pragmatics that involve the computational modeling of intentions, perceived intentions, and reactions to perceived intentions. Second, we create a new annotated resource for computational modeling of trolling. Each instance in this resource corresponds to a suspected trolling attempt taken from a Reddit conversation, its surrounding context, and its immediate responses, and will be manually coded with information such as the troll's intention and the recipients' reactions using our proposed categorization of trolling. Finally, we identify the instances that are difficult to classify with the help of a classifier trained with features taken from the state of the art, and subsequently present an analysis of these instances.",
"To our knowledge, our annotated resource is the first one of its sort that allows computational modeling on both the troll's side and the recipients' side. By making it publicly available, we hope to stimulate further research on this task. We believe that it will be valuable to any NLP researcher who is interested in the computational modeling of trolling."
],
[
"In this section, we discuss related work in the areas of trolling, bullying, abusive language detection and politeness, as they intersect in their scope and at least partially address the problem presented in this work.",
"In the realm of psychology, bishop2013effect and bishop2014representations elaborate a detailed description of a troll's personality, motivations, effects on the communities that trolls interfere in, and the criminal and psychological aspects of trolls. Their main focus is flaming (trolls) and hostile and aggressive interactions between users BIBREF1 .",
"On the computational side, mihaylov2015finding address the problem of identifying manipulation trolls in news community forums. Not only do they focus solely on troll identification, but the major difference with this work is that all their predictions are based on non-linguistic information such as number of votes, dates, number of comments and so on. In a networks related framework, kumar2014accurately and guha2004propagation present a methodology to identify malicious individuals in a network based solely on the network's properties rather than on the textual content of comments. cambria2010not propose a method that involves NLP components, but fail to provide an evaluation of their system.",
"There is extensive work on detecting offensive and abusive language in social media BIBREF2 and BIBREF3 . There are two clear differences between their work and ours. One is that trolling is concerned about not only abusive language but also a much larger range of language styles and addresses the intentions and interpretations of the commenters, which goes beyond the linguistic dimension. The other is that we are additionally interested in the reactions to trolling attempts, real or perceived, because we argued that this is a phenomenon that occurs in pairs through the interaction of at least two individuals, which is different from abusive language detection. Also, xu2012learning, xu2012fast and xu2013examination address bullying traces. Bullying traces are self-reported events of individuals describing being part of bullying events, but we believe that the real impact of computational trolling research is not on analyzing retrospective incidents, but on analyzing real-time conversations. chen2012detecting use lexical and semantic features to determine sentence offensiveness levels to identify cyberbullying, offensive or abusive comments on Youtube. On Youtube as well, dinakar2012common identified sensitive topics for cyberbullying. dadvar2014experts used expert systems to classify between bullying and no bullying in posts. van2015detection predict fine-grained categories for cyberbullying, distinguishing between insults and threats and identified user roles in the exchanges. Finally, hardaker2010trolling argues that trolling cannot be studied using established politeness research categories."
],
[
"In this section, we describe our proposal of a comprehensive trolling categorization. While there have been attempts in the realm of psychology to provide a working definition of trolling (e.g., hardaker2010trolling, bishop2014representations), their focus is mostly on modeling the troll's behavior. For instance, bishop2014representations constructed a “trolling magnitude” scale focused on the severity of abuse and misuse of internet mediated communications. bishop2013effect also categorized trolls based on psychological characteristics focused on pathologies and possible criminal behaviors. In contrast, our trolling categorization seeks to model not only the troll's behavior but also the impact on the recipients, as described below.",
"Since one of our goals is to identify trolling events, our datasets will be composed of suspected trolling attempts (i.e., comments that are suspected to be trolling attempts). In other words, some of these suspected trolling attempts will be real trolling attempts, and some of them won't. So, if a suspected trolling attempt is in fact not a trolling attempt, then its author will not be a troll.",
"To cover both the troll and the recipients, we define a (suspected trolling attempt, responses) pair as the basic unit that we consider for the study of trolling, where “responses” are all the direct responses to the suspected trolling attempt. We characterize a (suspected trolling attempt, responses) pair using four aspects. Two aspects describe the trolling attempt: (1) Intention (I) (what is its author's purpose?), and (2) Intention Disclosure (D) (is its author trying to deceive its readers by hiding his real (i.e., malicious) intentions?). The remaining two aspects are defined on each of the (direct) responses to the trolling attempt: (1) Intention Interpretation (R) (what is the responder's perception of the troll's intention?), and (2) the Response strategy (B) (what is the responder's reaction?). Two points deserve mention. First, R can be different from I due to misunderstanding and the fact that the troll may be trying to hide his intention. Second, B is influenced by R, and the responder's comment can itself be a trolling attempt. We believe that these four aspects constitute interesting, under-studied pragmatics tasks for NLP researchers.",
"The possible values of each aspect are described in Table TABREF1 . As noted before, since these are suspected trolling attempts, if an attempt turns out not to be a trolling attempt, its author will not be a troll.",
"For a given (suspected trolling attempt, responses) pair, not all of the 189 (= INLINEFORM0 ) combinations of values of the four aspects are possible. There are logical constraints that limit plausible combinations: a) Trolling or Playing Intentions (I) must have Hidden or Exposed Intention Disclosure (D), b) Normal intentions (I) can only have None Intention disclosure (D) and c) Trolling or Playing interpretation (R) cannot have Normal response strategy (B)."
],
[
"To enable the reader to better understand this categorization, we present two example excerpts taken from the original (Reddit) conversations. The first comment on each excerpt, generated by author C0, is given as a minimal piece of context. The second comment, written by the author C1 in italics, is the suspected trolling attempt. The rest of the comments comprise all direct responses to the suspected trolling comment.",
"Example 1.",
"",
"[noitemsep,nolistsep] ",
"Yeah, cause that's what usually happens. Also, quit following me around, I don't want a boyfriend.",
"[noitemsep,nolistsep]",
"I wasn't aware you were the same person.... I've replied to a number of stupid people recently, my bad",
"[noitemsep,nolistsep]",
"Trollname trollpost brotroll",
"",
"In this example, C1 is teasing C0, expecting to provoke or irritate him, and he is clearly disclosing his trolling intentions. In C0's response, we see that he clearly believes that C1 is trolling, since he directly calls him a “brotroll”, and his response strategy is to frustrate the trolling attempt by denouncing C1's trolling intentions (“trollpost”) and true identity (“brotroll”).",
"Example 2.",
"",
"[noitemsep,nolistsep] ",
"Please post a video of your dog doing this. The way I'm imagining this is adorable.",
"[noitemsep,nolistsep]",
"I hope the dog gets run over by a truck on the way out of the childrens playground.",
"[noitemsep,nolistsep]",
"If you're going to troll, can you at least try to be a bit more convincing?",
"Haha I hope the cancer kills you.",
"",
"In this example, we observe that C0's first comment is making a polite request (Please). In return, C1's answer is a mean-spirited comment whose intention is to disrupt and possibly hurt C0. Also, C1's comment is not subtle at all, so his intention is clearly disclosed. As for C2, she is clearly acknowledging C1's trolling intention, and her response strategy is a criticism, which we categorize as frustrate. Now, in C0's second comment, we observe that his interpretation is clear: he believes that C1 is trolling, and the negative effect is so tangible that his response strategy is to troll back or counter-troll by replying with a comparably mean comment."
],
[
"Reddit is a popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting on stories and comments, and commenting in the story's comment section, in the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversations on Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1 in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates for real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt. Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts. Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even where there is none due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling would allow us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.",
"For each retrieved comment, we reconstructed the original conversation tree it appears in, from the original post (i.e., the root) to the leaves, so that its parent and children can be recovered. We consider a comment in our dataset a suspected trolling attempt if at least one of its immediate children contains the word troll. For annotation purposes, we created snippets of conversations exactly like the ones shown in Example 1 and Example 2, each of which consists of the parent of the suspected trolling attempt, the suspected trolling attempt, and all of the direct responses to the suspected trolling attempt.",
"We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”.",
"Due to the subjective nature of the task we did not expect perfect agreement. However, on the 100 doubly-annotated snippets, we obtained substantial inter-annotator agreement according to Cohen's kappa statistic BIBREF4 for each of the four aspects: Intention: 0.788, Intention Disclosure: 0.780, Interpretation: 0.797 and Response 0.776. In the end, the annotators discussed their discrepancies and managed to resolve all of them."
],
[
"In this section, we make predictions on the four aspects of our task, with the primary goal of identifying the errors our classifier makes (i.e., the hard-to-classify instances) and hence the directions for future work, and the secondary goal of estimating the state of the art on this new task using only shallow (i.e., lexical and wordlist-based) features."
],
[
"For prediction we define two sets of features: (1) a basic feature set taken from Van Hee's van2015detection paper on cyberbullying prediction, and (2) an extended feature set that we designed using primarily information extracted from wordlists and dictionaries.",
"N-gram features. We encode each lemmatized and unlemmatized unigram and bigram collected from the training comments as a binary feature. In a similar manner, we include the unigram and bigram along with their POS tag as in BIBREF5 . To extract these features we used Stanford CoreNLP BIBREF6 .",
"Sentiment Polarity. The overall comment's emotion could be useful to identify the response and intention in a trolling attempt. So, we apply the Vader Sentiment Polarity Analyzer BIBREF7 and include four features, one per each measurement given by the analyzer: positive, neutral, negative and a composite metric, each as a real number value.",
"Emoticons. Reddit's comments make extensive use of emoticons. We argue that some emoticons are specifically used in trolling attempts to express a variety of emotions, which we hypothesize would be useful to identify a comment's intention, interpretation and response. For that reason, we use the emoticon dictionary developed hogenboom2015exploiting. We create a binary feature whose value is one if at least one of these emoticons is found in the comment.",
"Harmful Vocabulary. In their research on bullying, nitta2013detecting identified a small set of words that are highly offensive. We create a binary feature whose value is one if the comment contains at least one of these words.",
"Emotions Synsets. As in xu2012fast, we extracted all lemmas associated with each WordNet BIBREF8 synset involving seven emotions (anger, embarrassment, empathy, fear, pride, relief and sadness) as well as the synonyms of these emotion words extracted from the English merriam2004merriam dictionary. We create a binary feature whose value is one if any of these synsets or synonyms appears in the comment.",
"Swearing Vocabulary. We manually collected 1061 swear words and short phrases from the internet, blogs, forums and smaller repositories . The informal nature of this dictionary resembles the type of language used by flaming trolls and agitated responses, so we encode a binary feature whose value is one when at least one such swear word is found in the comment.",
"Swearing Vocabulary in Username. An interesting feature that is suggestive of the intention of a comment is the author's username. We found that abusive and annoying commenters contained cursing words in their usernames. So, we create a binary feature whose value is one if a swear word from the swearing vocabulary is found in their usernames.",
"Framenet. We apply the SEMAFOR parser BIBREF9 to each sentence in every comment, and construct three different types of binary features: every frame name that is present in the sentence, the frame name and the target word associated with it, and the argument name along with the token or lexical unit in the sentence associated with it. We believe that some frames are especially interesting from the trolling perspective. We hypothesize that these features are useful for identifying trolling attempts in which semantic and not just syntactic information is required.",
"Politeness cues. danescu2013computational identified cues that signal polite and impolite interactions among groups of people collaborating online. Based on our observations of trolling examples, it is clear that flaming, hostile and aggressive interactions between users BIBREF1 and engaged or emotional responses would use impolite cues. In contrast, neutralizing and frustrating responses to the troll avoid falling in confrontation and their vocabulary tends to be more polite. So we create a binary feature whose value is one if at least one cue appears in the comment.",
"GloVe Embeddings. All the aforementioned features constitute a high dimensional bag of words (BOW). Word embeddings were created to overcome certain problems with the BOW representation, like sparsity, and weight in correlations of semantically similar words. For this reason, and following nobata2016abusive, we create a distributed representation of the comments by averaging the word vector of each lowercase token in the comment found in the Twitter corpus pre-trained GloVe vectors BIBREF10 . The resulting comment vector representation is a 200 dimensional array that is concatenated with the existing BOW."
],
[
"Using the features described in the previous subsection, we train four independent classifiers using logistic regression, one for each of the four prediction tasks. All the results are obtained using 5-fold cross-validation experiments. In each fold experiment, we use three folds for training, one fold for development, and one fold for testing. All learning parameters are set to their default values except for the regularization parameter, which we tuned on the development set. In Table TABREF19 the leftmost results column reports the F1 score based on majority class prediction. The next section (Single Feature Group) reports F1 scores obtained by using one feature group at a time. The goal of the latter set of experiments is to gain insight into each feature group's predictive effectiveness. The rightmost section (All Features) shows the system performance measured using recall, precision, and F1 when all features described in section SECREF13 are used.",
"The majority class prediction experiment is the simplest baseline against which we can compare the rest of the experiments. In order to illustrate the prediction power of each feature group independently of all others, we perform the “Single Feature Group” experiments. As we can observe in Table TABREF19, there are groups of features that independently are no better than the majority baseline; for example, the emoticons, politeness cues and polarity are no better disclosure predictors than the majority baseline. We also observe that n-grams and GloVe features are the only feature groups that contribute to more than one class type across the different tasks. The “All Features” experiment shows how the interaction between feature sets performs compared to any of the feature groups in isolation. The accuracy metric for each trolling task is meant to provide an overall performance measure over all the classes within a particular task, and to allow comparison between different experiments. In particular, we observe that GloVe vectors are the most powerful feature set, accuracy-wise, even better than the experiments with all features for all tasks except interpretation.",
"The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. This result is what makes this dataset interesting: there is still lots of room for research on this task. Again, the primary goal of this experiment is to help identify the difficult-to-classify instances for analysis in the next section."
],
[
"In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.",
"Errors on Intention (I) prediction: The lack of background is a major problem when identifying trolling comments. For example, “your comments fit well in Stormfront” seems inoffensive on the surface. However, people who know that Stormfront is a white supremacist website will realize that the author of this comment had an annoying or malicious intention. But our system had no knowledge about it and simply predicted it as non-trolling. These kinds of errors reduce recall on the prediction of trolling comments. A solution would be to include additional knowledge from ontologies along with sentiment or polarity information. One could modify NELL BIBREF12 to broaden the understanding of entities in the comments.",
"Non-cursing aggressions and insults: This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. The problem arises with subtler aggressions and insults that are equally or even more annoying, such as “Troll? How cute.” and “settle down drama queen”. The classifier has a more difficult task of determining that these are indeed aggressions or insults. This error also decreases the recall of trolling intention. A solution would be to exploit all the comments made by the suspected troll in the entire conversation in order to increase the chances of finding curse words or other cues that lead the classifier to correctly classify the comment as trolling.",
"Another source of error is the presence of controversial topic words such as “black”, “feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls. The classifier seems too eager to classify a comment as trolling in the presence of these words, but in many cases such comments are not trolling attempts. To ameliorate this problem, one could create ad-hoc word embeddings by training GloVe or another type of distributed representation on a large corpus from the specific social media platform in consideration. From these vectors one could expect a better representation of controversial topics and their interactions with other words, which might help reduce these errors.",
"Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model, even when augmented with the distributional features given by the GloVe vectors. For example, the suspected troll's comment “how to deal with refugees? How about a bullet to the head” is clearly mean-spirited and is an example of disclosed trolling. However, to reach that conclusion the reader needs to infer the meaning of “bullet to the head” and that this action is desirable for a vulnerable group like migrants or refugees. This problem produces low recall for the disclosed prediction task. A solution for this problem may be the use of deeper semantics, where we represent the comments and sentences in their logical form and infer from them the intended meaning.",
"Errors on Interpretation (R) prediction: It is common practice for many users to directly ask the suspected troll if he/she is trolling or not. There are several variations of this question, such as “Are you a troll?” and “not sure if trolling or not”. While the presence of a question like these seems to give us a hint of the responder's interpretation, we cannot be sure of the interpretation without also considering the context. One way to improve interpretation is to exploit the response strategy, but the response strategy in our model is predicted independently of interpretation. So one solution could be similar to the one proposed above for the disclosure task: jointly learning classifiers that predict both variables simultaneously. Another possibility is to use the temporal sequence of response comments and use earlier response interpretations as input features for later comments. This could be useful since commenters seem to influence each other as they read through the conversation.",
"Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. The key distinction between them is that there exists some criticism in the Frustrate responses towards the suspected troll's comment, while “Neutralizing” comments acknowledge that the suspected troll has trolling intentions, but gives no importance to them. For example, response comments such as “oh, you are a troll” and “you are just a lame troll” are examples of this subtle difference. The first is a case of “neutralize” while the second is indeed criticizing the suspected troll's comment and therefore a “frustrate” response strategy. This kind of error affects both precision and recall for these two classes. A possible solution could be to train a specialized classifier to disambiguate between “frustrate” and “neutralize” only.",
"Another challenging problem is the distinction between the classes “Troll” and “Engage”. This is true when the direct responder is intensely flared up by the suspected comment to the point that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases is the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, and the comments evolve into very agitated remarks. One may then use this information to disambiguate between the two classes."
],
[
"We presented a new view on the computational modeling of trolling in Internet fora where we proposed a comprehensive categorization of trolling attempts that for the first time considers trolling from not only the troll's perspective but also the responders' perspectives. This categorization gives rise to four interesting pragmatics tasks that involve modeling intentions, perceived intentions, and reactions. Perhaps most importantly, we create an annotated dataset that we believe is the first of its sort. We intend to make it publicly available with the hope of stimulating research on trolling."
]
]
}
|
{
"question": [
"Do they experiment with the dataset?",
"Do they use a crowdsourcing platform for annotation?",
"What is an example of a difficult-to-classify case?",
"What potential solutions are suggested?",
"What is the size of the dataset?",
"What Reddit communities do they look at?"
],
"question_id": [
"394cf73c0aac8ccb45ce1b133f4e765e8e175403",
"2c4003f25e8d95a3768204f52a7a5f5e17cb2102",
"65e32f73357bb26a29a58596e1ac314f7e9c6c91",
"46f175e1322d648ab2c0258a9609fe6f43d3b44e",
"7cc22fd8c9d0e1ce5e86d0cbe90bf3a177f22a68",
"3fa638e6167e1c7a931c8ee5c0e2e397ec1b6cda"
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"search_query": [
"social media",
"social media",
"social media",
"social media",
"social media",
"social media"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. This result is what makes this dataset interesting: there is still lots of room for research on this task. Again, the primary goal of this experiment is to help identify the difficult-to-classify instances for analysis in the next section."
],
"highlighted_evidence": [
"The overall Total Accuracy score reported in table TABREF19 using the entire feature set is 549. "
]
}
],
"annotation_id": [
"ea5e04a335216985caf9fe97f2ce836a48a80650"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"Reddit is popular website that allows registered users (without identity verification) to participate in fora grouped by topic or interest. Participation consists of posting stories that can be seen by other users, voting stories and comments, and comments in the story's comment section, in the form of a forum. The forums are arranged in the form of a tree, allowing nested conversations, where the replies to a comment are its direct responses. We collected all comments in the stories' conversation in Reddit that were posted in August 2015. Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. To do so, we used Lucene to create an inverted index from the comments and queried it for comments containing the word “troll” with an edit distance of 1 in order to include close variations of this word, hypothesizing that such comments would be reasonable candidates of real trolling attempts. We did observe, however, that sometimes people use the word troll to point out that another user is trolling. Other times, people use the term to express their frustration about a particular user, but there is no trolling attempt. Yet other times people simply discuss trolling and trolls without actually observing one. Nonetheless, we found that this search produced a dataset in which 44.3% of the comments are real trolling attempts. Moreover, it is possible for commenters to believe that they are witnessing a trolling attempt and respond accordingly even where there is none due to misunderstanding. Therefore, the inclusion of comments that do not involve trolling would allow us to learn what triggers a user's interpretation of trolling when it is not present and what kind of response strategies are used.",
"We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”."
],
"highlighted_evidence": [
"Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. ",
"We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. "
]
}
],
"annotation_id": [
"76357c9c4f5a08b96237b1d71756118497627f4f"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"The lack of background",
"Non-cursing aggressions and insults",
"the presence of controversial topic words ",
" shallow meaning representation",
"directly ask the suspected troll if he/she is trolling or not",
"a blurry line between “Frustrate” and “Neutralize”",
"distinction between the classes “Troll” and “Engage”"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.",
"Errors on Intention (I) prediction: The lack of background is a major problem when identifying trolling comments. For example, “your comments fit well in Stormfront” seems inoffensive on the surface. However, people who know that Stormfront is a white supremacist website will realize that the author of this comment had an annoying or malicious intention. But our system had no knowledge about it and simply predicted it as non-trolling. These kinds of errors reduce recall on the prediction of trolling comments. A solution would be to include additional knowledge from ontologies along with a sentiment or polarity. One could modify NELL BIBREF12 to broaden the understanding of entities in the comments.",
"Non-cursing aggressions and insults This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. The problem arises with subtler aggressions and insults that are equally or even more annoying, such as “Troll? How cute.” and “settle down drama queen”. The classifier has a more difficult task of determining that these are indeed aggressions or insults. This error also decreases the recall of trolling intention. A solution would be to exploit all the comments made by the suspected troll in the entire conversation in order to increase the chances of finding curse words or other cues that lead the classifier to correctly classify the comment as trolling.",
"Another source of error is the presence of controversial topic words such as “black”,“feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls. The classifier seems too confident to classify a comment as trolling in the presence of these words, but in many cases they do not. In order to ameliorate this problem, one could create ad-hoc word embeddings by training glove or other type of distributed representation on a large corpus for the specific social media platform in consideration. From these vectors one could expect a better representation of controversial topics and their interactions with other words so they might help to reduce these errors.",
"Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model even when augmented with the distributional features given by the glove vectors. For example, the suspected troll's comment “how to deal with refugees? How about a bullet to the head” is clearly mean-spirited and is an example of disclosed trolling. However, to reach that conclusion the reader needs to infer the meaning of “bullet to the head” and that this action is desirable for a vulnerable group like migrants or refugees. This problem produces low recall for the disclosed prediction task. A solution for this problem may be the use of deeper semantics, where we represent the comments and sentences in their logical form and infer from them the intended meaning.",
"Errors on Interpretation (R) prediction: it is a common practice from many users to directly ask the suspected troll if he/she is trolling or not. There are several variations of this question, such as “Are you a troll?” and “not sure if trolling or not”. While the presence of a question like these seems to give us a hint of the responder's interpretation, we cannot be sure of his interpretation without also considering the context. One way to improve interpretation is to exploit the response strategy, but the response strategy in our model is predicted independently of interpretation. So one solution could be similar to the one proposed above for the disclosure task problem: jointly learning classifiers that predict both variables simultaneously. Another possibility is to use the temporal sequence of response comments and make use of older response interpretation as input features for later comments. This could be useful since commenters seem to influence each other as they read through the conversation.",
"Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. The key distinction between them is that there exists some criticism in the Frustrate responses towards the suspected troll's comment, while “Neutralizing” comments acknowledge that the suspected troll has trolling intentions, but give no importance to them. For example, response comments such as “oh, you are a troll” and “you are just a lame troll” are examples of this subtle difference. The first is a case of “neutralize” while the second is indeed criticizing the suspected troll's comment and therefore a “frustrate” response strategy. This kind of error affects both precision and recall for these two classes. A possible solution could be to train a specialized classifier to disambiguate between “frustrate” and “neutralize” only.",
"Another challenging problem is the distinction between the classes “Troll” and “Engage”. This is true when the direct responder is intensely flared up by the suspected comment to the point that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases is the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, and the comments evolve into very agitated remarks. One may then use this information to disambiguate between the two classes."
],
"highlighted_evidence": [
"In order to provide directions for future work, we analyze the errors made by the classifier trained on the extended features on the four prediction tasks.\n\nErrors on Intention (I) prediction: The lack of background is a major problem when identifying trolling comments.",
"Non-cursing aggressions and insults This is a challenging problem, since the majority of abusive and insulting comments rely on profanity and swearing. ",
"Another source of error is the presence of controversial topic words such as “black”,“feminism”, “killing”, “racism”, “brown”, etc. that are commonly used by trolls.",
"Errors on Disclosure (D) prediction: A major source of error that affects disclosure is the shallow meaning representation obtained from the BOW model even when augmented with the distributional features given by the glove vectors.",
"Errors on Interpretation (R) prediction: it is a common practice from many users to directly ask the suspected troll if he/she is trolling or not. ",
"Errors on Response Strategy (B) prediction: In some cases there is a blurry line between “Frustrate” and “Neutralize”. ",
"Another challenging problem is the distinction between the classes “Troll” and “Engage”. "
]
}
],
"annotation_id": [
"29b2916971ecf070449e09aadfb6715f4cad53ec"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" inclusion of longer parts of the conversation"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Another challenging problem is the distinction between the classes “Troll” and “Engage”. This is true when the direct responder is intensely flared up by the suspected comment to the point that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases is the presence of insults, and to detect them we look for swear words, but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, and the comments evolve into very agitated remarks. One may then use this information to disambiguate between the two classes."
],
"highlighted_evidence": [
"This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. "
]
}
],
"annotation_id": [
"139f3d416ba32e78ad435ed102dc234b1c898cdd"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"1000 conversations composed of 6833 sentences and 88047 tokens"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet. We recognize that this limited amount of information is not always sufficient to recover the four aspects we are interested in, so we give the annotators the option to discard instances for which they couldn't determine the labels confidently. The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. The distribution over the classes per trolling aspect is shown in the table TABREF19 in the column “Size”."
],
"highlighted_evidence": [
"The final annotated dataset consists of 1000 conversations composed of 6833 sentences and 88047 tokens. "
]
}
],
"annotation_id": [
"a01202588764d81374be8fb96d9c4e5a45aefdec"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"175130f8de4381c0aa9f17a799617e6d33036a28"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
}
|
{
"caption": [
"Table 1: Classes for trolling aspects: Intention, Intention Disclosure, Intention Interpretation and Response Strategy. Size refers to the percentage per class, in parenthesis is the total number of instances in the dataset.",
"Table 2: Experiments Results. Below the “mjr” header, we report F1 scores for the majority class prediction; we report F1 scores for the four aspects of trolling: Intention, Intentions Disclosure, Interpretation, and Response strategy. Also, below the “Single Feature Group” header, we report F1 scores as before, when the feature group indicated in the column headers is the only feature group used for the classifier. The column header abbreviations stand for: Emoticons, Harmful Vocabulary, Emotion Synsets, Swearing Vocabulary, Swearing Vocabulary in Usernames, Framenet, Politeness cues, n-grams (actual n-grams and n-grams appended with their corresponding part of speech tag) and Glove embeddings, in that order. Below the “All Features” header we report Recall, Precision and F1 score, respectively, when all features are used for prediction. All experiments are performed using a logistic regression classifier per task. The last column reports the class distribution in percentage per task. The last row of each trolling aspect reports accuracy (the percentage of instances correctly classified). The last row in the table reports total accuracy, the percentage of correctly classified instances considering all aspects."
],
"file": [
"3-Table1-1.png",
"7-Table2-1.png"
]
}
|
1912.09713
|
Measuring Compositional Generalization: A Comprehensive Method on Realistic Data
|
State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.
|
{
"section_name": [
"Introduction",
"Distribution-Based Compositionality Assessment (DBCA)",
"Distribution-Based Compositionality Assessment (DBCA) ::: Principles for measuring compositionality",
"The CFQ Dataset",
"The CFQ Dataset ::: Automatic, rule-based generation",
"The CFQ Dataset ::: Dataset details and statistics",
"Compositionality Experiments for CFQ and scan",
"Experimental Results and Analysis ::: Experiment Setup",
"Experimental Results and Analysis ::: Results and analysis for CFQ",
"Experimental Results and Analysis ::: Results and analysis for scan",
"Related Work",
"Conclusion and Outlook",
"Example Dataset Item",
"Data Quality Analysis",
"Data Distribution Analysis ::: Answer frequencies",
"Data Distribution Analysis ::: Impact of subsampling on the distribution of complexity levels",
"Data Distribution Analysis ::: Impact of subsampling on the frequency of rules and rule combinations",
"Divergence-Based Split Analysis ::: Qualitative analysis of MCD@!START@$_{1}$@!END@",
"Divergence-Based Split Analysis ::: Quantitative analysis of MCD@!START@$_{1}$@!END@",
"Hyperparameters",
"Detailed error analysis ::: Breakdown of error types",
"Detailed error analysis ::: Qualitative error analysis",
"Additional experimental results on scan",
"Analysis of relations between accuracy, compound divergence, and training size",
"Logical Form",
"Rule Format",
"Rule Format ::: Grammar rule format",
"Rule Format ::: Knowledge rule format",
"Rule Format ::: Inference rule format",
"Rule Format ::: Resolution rule format",
"Generation Algorithm",
"Generation Algorithm ::: Join by Logical Form",
"Generation Algorithm ::: Relationship between Generation and Parsing",
"Generation Algorithm ::: Selecting an appropriate sample set",
"Example of a rule application DAG",
"Example of a rule application DAG ::: DAG normalization",
"Example of a rule application DAG ::: Concept abbreviations",
"Example of a rule application DAG ::: Entity placeholders",
"Example of a rule application DAG ::: Subgraphs and their weights",
"Rules Index",
"Rules Index ::: Grammar rules",
"Rules Index ::: Inference rules",
"Rules Index ::: Resolution rules",
"Rules Index ::: Knowledge rules"
],
"paragraphs": [
[
"Human intelligence exhibits systematic compositionality BIBREF0, the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” BIBREF1. In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.",
"Humans demonstrate this ability in many different domains, such as natural language understanding (NLU) and visual scene understanding. For example, we can learn the meaning of a new word and then apply it to other language contexts. As BIBREF2 put it: “Once a person learns the meaning of a new verb `dax', he or she can immediately understand the meaning of `dax twice' and `sing and dax'.” Similarly, we can learn a new object shape and then understand its compositions with previously learned colors or materials BIBREF3, BIBREF4.",
"In contrast, state-of-the-art machine learning (ML) methods often fail to capture the compositional structure that is underlying the problem domain and thus fail to generalize compositionally BIBREF2, BIBREF5, BIBREF6, BIBREF7, BIBREF3. We believe that part of the reason for this shortcoming is a lack of realistic benchmarks that comprehensively measure this aspect of learning in realistic scenarios.",
"As others have proposed, compositional generalization can be assessed using a train-test split based on observable properties of the examples that intuitively correlate with their underlying compositional structure. BIBREF8, for example, propose to test on different output patterns than are in the train set, while BIBREF2 propose, among others, to split examples by output length or to test on examples containing primitives that are rarely shown during training. In this paper, we formalize and generalize this intuition and make these contributions:",
"We introduce distribution-based compositionality assessment (DBCA), which is a novel method to quantitatively assess the adequacy of a particular dataset split for measuring compositional generalization and to construct splits that are ideally suited for this purpose (Section SECREF2).",
"We present the Compositional Freebase Questions (CFQ) , a simple yet realistic and large NLU dataset that is specifically designed to measure compositional generalization using the DBCA method, and we describe how to construct such a dataset (Section SECREF3).",
"We use the DBCA method to construct a series of experiments for measuring compositionality on CFQ and scan BIBREF2 and to quantitatively compare these experiments to other compositionality experiments (Section SECREF4).",
"We analyze the performance of three baseline ML architectures on these experiments and show that these architectures fail to generalize compositionally, and perhaps more surprisingly, that compound divergence between train and test sets is a good predictor of the test accuracy (Section SECREF5).",
""
],
[
"",
"Like other authors, we propose to measure a learner's ability to generalize compositionally by using a setup where the train and test sets come from different distributions. More specifically, we propose a setup where each example is obtained by composing primitive elements (atoms), and where these atoms are similarly represented in the train and test sets while the test set contains novel compounds, i.e., new ways of composing the atoms of the train set.",
"As a simple illustrative scenario, consider the task of answering simple questions such as “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?”. In this scenario, the atoms intuitively correspond to the primitive elements that are used to compose those questions, such as the predicates “direct(ed)” and “produce(d)”, the question patterns “Who [predicate] [entity]” and “Did [entity1] [predicate] [entity2]”, and the entities “Inception”, “Christopher Nolan”, etc. The compounds on the other hand correspond to the combinations of these atoms that appear in the various examples: \"Who directed [entity]?\", \"Did Christopher Nolan [predicate] Inception?\", etc.",
"To measure compositional generalization on such a task, one might therefore use the questions “Who directed Inception?” and “Did Christopher Nolan produce Goldfinger?” as training examples while testing on questions such as “Did Christopher Nolan direct Goldfinger?” and \"Who produced Inception?\" because the atoms are identically represented in the train and test sets while the compounds differ.",
"To make this intuition more precise, we focus on datasets such as CFQ (introduced in Section SECREF3) and scan BIBREF2, where each example can be created from a formal set of rules by successively applying a number of these rules. In this case, the atoms are the individual rules, while the compounds are the subgraphs of the directed acyclic graphs (DAGs) that correspond to the rule applications. (See Sections SECREF3 and SECREF4 for more details.)",
""
],
[
"We use the term compositionality experiment to mean a particular way of splitting the data into train and test sets with the goal of measuring compositional generalization. Based on the notions of atoms and compounds described above, we say that an ideal compositionality experiment should adhere to the following two principles:",
"",
"Similar atom distribution: All atoms present in the test set are also present in the train set, and the distribution of atoms in the train set is as similar as possible to their distribution in the test set.",
"",
"Different compound distribution: The distribution of compounds in the train set is as different as possible from the distribution in the test set.",
"",
"The second principle guarantees that the experiment is compositionally challenging in the sense that it tests the learner on compounds that are as different as possible from the compounds used during training. The first principle aims to guarantee that the experiment is exclusively measuring the effect of the difference in the way atoms are composed to form compounds (rather than some related but different property such as domain adaptation on the distribution of the atoms).",
"To determine to which degree a certain experiment adheres to these principles, we use the following formalization. For a sample set $T$, we use $\\mathcal {F}_A(T)$ to denote the frequency distribution of atoms in $T$ and $\\mathcal {F}_C(T)$ for the weighted frequency distribution of compounds in $T$, which correspond to the subgraphs of the rule application DAGs. For practicality, we do not consider all subgraphs of rule application DAGs when computing the compound divergence. Instead, we first generate a large subset $\\mathbb {G}$ of subgraphs, then weight them in context of their occurrence, and keep only the ones with highest sum of weights. The purpose of the weighting is to avoid double-counting compounds that are highly correlated with some of their super-compounds. We achieve this by calculating the weight of $G \\in \\mathbb {G}$ in a sample as $w(G) = \\max _{g \\in \\text{occ}(G)} (1 - \\max _{G^{\\prime }: g \\prec g^{\\prime } \\in \\text{occ}(G^{\\prime })} P(G^{\\prime }| G))$, where $\\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\\prec $ denotes the strict subgraph relation, and $P(G^{\\prime }| G)$ is the empirical probability of $G^{\\prime }$ occurring as a supergraph of $G$ over the full sample set. See Appendix SECREF202 for example subgraphs and more details on the weighting.",
"We measure divergence (or similarity) of the weighted distributions using the Chernoff coefficient $C_\\alpha (P \\Vert Q) = \\sum _{k} p_k^\\alpha \\, q_k^{1-\\alpha } \\in [0, 1]$ BIBREF9. For the atom divergence, we use $\\alpha =0.5$, which corresponds to the Bhattacharyya coefficient and reflects the desire of making the atom distributions in train and test as similar as possible. For the compound divergence, we use $\\alpha = 0.1$, which reflects the intuition that it is more important whether a certain compound occurs in $P$ (train) than whether the probabilities in $P$ (train) and $Q$ (test) match exactly. This allows us to formally define as follows the notions of compound divergence $\\mathcal {D}_C$ and atom divergence $\\mathcal {D}_A$ of a compositionality experiment consisting of a train set $V$ and a test set $W$:",
"Based on these principles, we suggest to use as a preferred compositionality benchmark for a given dataset the accuracy obtained by a learner on splits with maximum compound divergence and low atom divergence (we use $\\mathcal {D}_A \\le 0.02$). See Section SECREF4 for details about how to construct such splits."
],
[
"We present the Compositional Freebase Questions (CFQ) as an example of how to construct a dataset that is specifically designed to measure compositional generalization using the DBCA method introduced above. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding sparql query against the Freebase knowledge base BIBREF10. This means that CFQ can be used for semantic parsing BIBREF11, BIBREF12, which is the task that we focus on in this paper."
],
[
"BIBREF13 describe a number of benefits for automated rule-based dataset generation, including scalability, control of scope, and avoidance of human errors. Beyond these benefits, however, such an approach is particularly attractive in the context of measuring compositional generalization using the DBCA method, as it allows us to precisely track the atoms (rules) and compounds (rule applications) of each example by recording the sequence of rule applications used to generate it.",
"Since the way we measure compositionality depends on how the examples can be broken down into atoms and compounds, we design the generation rules so as to have few and meaningful atoms. More precisely, we aim to have as few rules as possible so that the richness of the examples comes from composing them, which yields a large variety of compounds (enabling a large range of different compound divergences) while making it easy to obtain similar distributions of atoms. Also, we aim to make our rules truly “atomic” in the sense that the behavior of any rule is independent of the context where it is applied (e.g., rules may not contain “if-then-else” constructs).",
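As a concrete illustration of this bookkeeping, one can record each example's generation as a tree of rule applications and read off its atoms (the individual rules) and compounds (rule combinations). The sketch below is simplified: CFQ's actual compounds are subgraphs of the rule-application DAG, so the local rule-plus-children combination used here is only one simple stand-in:

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class RuleApp:
    """One application of a generation rule, with its sub-applications."""
    rule: str
    children: List["RuleApp"] = field(default_factory=list)

def atoms(app: RuleApp) -> Set[str]:
    """Atoms of an example: the individual rules used to generate it."""
    result = {app.rule}
    for child in app.children:
        result |= atoms(child)
    return result

def compounds(app: RuleApp) -> Set[Tuple]:
    """Compounds here: a rule together with the rules of its direct
    sub-applications (a simplified stand-in for DAG subgraphs)."""
    result = set()
    if app.children:
        result.add((app.rule, tuple(sorted(c.rule for c in app.children))))
    for child in app.children:
        result |= compounds(child)
    return result
```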
"In order to minimize the number of rules, we use an intermediate logical form that serves as a uniform semantic representation with relatively direct mappings to natural language and sparql. Our rules thus fall into the following four categories (a selection of rules is provided in Appendix SECREF20):",
"Grammar rules that generate natural language constructs and corresponding logical forms.",
"Inference rules that describe transformations on logical forms, allowing us to factor out transformations that are independent of specific linguistic and sparql constructs.",
"Resolution rules that map constructs of the logical form to sparql constructs.",
"Knowledge rules that supply logical form expressions that are universally applicable. Other rules can be kept more generic by parameterizing them on knowledge.",
"These rules define a language of triples of the form $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $. Our generation algorithm produces such triples in a mixed top-down and bottom-up fashion. We first apply grammar rules and inference rules to produce the natural language questions and their semantics in our logical form. Then we apply resolution rules to obtain the sparql query. See Figure FIGREF14 for an illustration. In addition, the generator produces a normalized, directed acyclic graph (DAG) of rule applications that corresponds to the normalized program that generated the triple. (Appendix SECREF19 shows an example.) Edges of this DAG represent dependencies among the rule applications, and the normalization ensures that a certain rule combination is represented using the same DAG across all the examples where it occurs.",
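The normalization of the rule-application DAG can be implemented by bottom-up hash-consing: structurally identical sub-applications are interned so that a given rule combination is represented by one shared node across all examples. A minimal sketch over nested tuples (the tuple representation and function name are our assumptions):

```python
def intern_dag(node, table=None):
    """Normalize a rule-application tree into a DAG by hash-consing.

    node: a nested tuple ('rule_name', (child, child, ...)).
    Structurally identical sub-applications become the *same* Python
    object, so a given rule combination maps to one shared node.
    """
    if table is None:
        table = {}
    rule, children = node
    key = (rule, tuple(intern_dag(c, table) for c in children))
    return table.setdefault(key, key)
```

After normalization, two occurrences of the same sub-application can be detected by object identity rather than structural comparison.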
"The described approach can generate a potentially infinite set of questions, from which we first sample randomly and then subsample (to maximize the overall diversity of rule combinations while keeping a uniform distribution over complexity). We measure the diversity of rule combinations using the empirical entropy of a weighted subset of the rule application DAGs, and we use the number of rule applications as a measure of the complexity of an example. We also limit the maximum example complexity such that the questions remain relatively natural. Table TABREF15 shows examples of generated questions at varying levels of complexity. An example of a complete data item is shown in Appendix SECREF8, a more detailed data quality analysis is presented in Appendix SECREF9, and the generation algorithm is discussed in more detail in Appendix SECREF18."
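The subsampling step can be sketched as a greedy procedure: bucket candidates by complexity (number of rule applications), keep a uniform number per bucket, and at each step prefer the example whose rule combinations most increase the empirical entropy of the kept set. This is an illustrative simplification (the paper measures entropy over a weighted subset of rule-application DAGs), with `compounds_of` and `complexity_of` as caller-supplied functions:

```python
import math
import random
from collections import Counter, defaultdict

def empirical_entropy(counter):
    """Shannon entropy (bits) of an empirical distribution of counts."""
    total = sum(counter.values())
    return -sum((c / total) * math.log2(c / total) for c in counter.values())

def subsample(examples, compounds_of, complexity_of, per_level, seed=0):
    """Keep up to per_level examples per complexity level, greedily
    preferring the example that most increases compound entropy."""
    rng = random.Random(seed)
    by_level = defaultdict(list)
    for ex in examples:
        by_level[complexity_of(ex)].append(ex)
    kept, counts = [], Counter()
    for level in sorted(by_level):
        pool = by_level[level]
        rng.shuffle(pool)
        for _ in range(min(per_level, len(pool))):
            best = max(pool, key=lambda ex: empirical_entropy(counts + Counter(compounds_of(ex))))
            pool.remove(best)
            kept.append(best)
            counts += Counter(compounds_of(best))
    return kept
```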
],
[
"Input and output. While the primary focus of the dataset is semantic parsing (natural language question to sparql query), we also provide natural language answers for each question. This allows the dataset to be used in a text-in-text-out scenario as well (see Appendix SECREF8).",
"Ambiguity. We largely avoid ambiguity in the questions. In particular, we make sure each name is used to refer to exactly one entity, and we avoid different possible parse trees, different interpretations of plurals, and the need for disambiguation that requires semantic knowledge.",
"Scope. We select the following language features as compositional building blocks: open questions and closed questions; subordinate clauses; active and passive voice; conjunctions of verb phrases and of noun phrases; possessives with roles (“X's parent”); adjectives; and type restrictions. For knowledge base features, we select roles, verbs, types, and adjectives from domains that are well-represented in Freebase and that can be combined easily. We start from the popular movie domain (e.g., directing, producing, editor, sequel) and extend this with personal relations (e.g., parent, spouse, sibling), companies (e.g., founding, employer), and adjectives (e.g., gender, nationality).",
"Logical form and grammar. For the internal logical form, we adopt a variation of the description logic $\\mathcal {EL}$ BIBREF14, BIBREF15, augmented with additional constructors (see Appendix SECREF16) to more easily map to certain linguistic structures. For the grammar rules, we use a unification-based grammar syntax similar to that used in the Prolog extension GULP 3.1 BIBREF16, with addition of support for disjunction, negation, absence, and default inheritance of features for compactness of representation.",
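To illustrate the unification machinery underlying such grammars, here is a toy unifier over flat feature dictionaries. Real unification grammars, including the GULP-style syntax mentioned above, also handle nested feature structures, variables, disjunction, negation, and default inheritance, all of which this sketch omits:

```python
def unify(f, g):
    """Unify two flat feature dicts.

    Returns the merged feature set, or None if any shared feature has
    conflicting values (a unification failure).
    """
    merged = dict(f)
    for feature, value in g.items():
        if feature in merged and merged[feature] != value:
            return None  # clash, e.g. num=sg vs num=pl
        merged[feature] = value
    return merged
```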
"Grounding in Freebase. Once an example is generated by the CFQ rules, it still contains entity placeholders instead of Freebase machine ids (MIDs). For the task of semantic parsing, the examples could theoretically be used as-is, as our avoidance of semantic ambiguity means that a learner should not need knowledge of the specific entity in order to parse the question. To make the questions natural, however, we apply an additional step of replacing the placeholders with appropriate specific entities. To do this we first execute the generated sparql query against Freebase. This returns a set of candidate MID combinations that satisfy the query and can be used as substitutes. If the set is empty, we abandon the generated question candidate as unnatural. Otherwise, we pick one combination at random to yield a question with a positive answer.